A project I am working on handles a fairly large data volume: tens of millions of rows are inserted every day. Originally the requirement was hard-coded to keep 23 hours of data in the database, so we created 24 partitions on the relevant tables, one per hour, and at half past each hour we truncated the designated partition (the one corresponding to the current time minus 24 hours). This was easy to implement with a MySQL event.
The event calls a stored procedure on a schedule, and the procedure truncates the target partition:
DELIMITER $$

ALTER DEFINER=`root`@`172.17.5.102` EVENT `EV_TRUNCATE_STREAM_INDEX`
    ON SCHEDULE EVERY 1 HOUR STARTS '2000-01-01 00:30:00'
    ON COMPLETION NOT PRESERVE ENABLE
DO BEGIN
    CALL pro_truncate_stream_index();
END$$

DELIMITER ;
DELIMITER $$

USE `IPVIEW`$$

DROP PROCEDURE IF EXISTS `pro_truncate_stream_index`$$

CREATE DEFINER=`root`@`172.17.5.102` PROCEDURE `pro_truncate_stream_index`()
BEGIN
    -- The partition one hour ahead of the current hour is the slot
    -- holding the oldest (23-hour-old) data, so that is the one truncated.
    SET @partName=CONCAT('p', HOUR(DATE_ADD(NOW(), INTERVAL 1 HOUR)));
    SET @SQL_STR=CONCAT('ALTER TABLE T_STREAM TRUNCATE PARTITION ', @partName);
    PREPARE stmt8 FROM @SQL_STR;
    EXECUTE stmt8;
    DEALLOCATE PREPARE stmt8; -- release the prepared statement
END$$

DELIMITER ;
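For reference, the hour arithmetic the procedure performs can be sketched in Java (a hypothetical helper, not part of the project code): the partition named "p" + HOUR(NOW() + 1 hour) is the one truncated on each run.

```java
import java.util.Calendar;

public class TruncateTarget {
    // Mirrors the stored procedure above: advance the clock by one hour
    // and take the hour-of-day as the partition suffix.
    static String partitionToTruncate(Calendar now) {
        Calendar c = (Calendar) now.clone();
        c.add(Calendar.HOUR_OF_DAY, 1);
        return "p" + c.get(Calendar.HOUR_OF_DAY);
    }

    public static void main(String[] args) {
        Calendar halfPastOne = Calendar.getInstance();
        halfPastOne.set(2014, Calendar.MARCH, 30, 13, 30, 0);
        System.out.println(partitionToTruncate(halfPastOne)); // p14
    }
}
```

Note that the hour wraps at midnight, which is exactly why 24 fixed slots suffice for a 23-hour retention window.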
The retention time, however, is no longer fixed: it now comes from the device configuration page instead of being hard-coded at 24 hours. So first the way we create the tables has to change: every hour one partition is dropped and one new partition is created. We still use LIST partitioning, but the partition names are now built from the current time in "yyyyMMddHH" format (MySQL partition names cannot be purely numeric, so we prefix them with a "p").
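A minimal sketch of how such a name could be derived in Java (the class and method names are assumptions for illustration; the project itself uses its own DateTimeUtil.dateAddHours helper for this):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class PartitionName {
    // Build the partition name for the hour `offset` hours from `base`,
    // e.g. "p2014033015". The leading "p" is needed because a purely
    // numeric partition name is not allowed.
    static String forHourOffset(Calendar base, int offset) {
        Calendar c = (Calendar) base.clone();
        c.add(Calendar.HOUR_OF_DAY, offset);
        return "p" + new SimpleDateFormat("yyyyMMddHH").format(c.getTime());
    }

    public static void main(String[] args) {
        Calendar base = Calendar.getInstance();
        base.set(2014, Calendar.MARCH, 30, 15, 0, 0);
        System.out.println(forHourOffset(base, 0)); // p2014033015
        System.out.println(forHourOffset(base, 4)); // p2014033019
    }
}
```

The numeric part after the "p" doubles as the LIST value (`VALUES IN (2014033015)`), which is how the servlet below derives the value with `partName.substring(1)`.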
Table definitions:
/* Table structure for table `T_STREAM` */
DROP TABLE IF EXISTS `T_STREAM`;

CREATE TABLE `T_STREAM` (
  `ID` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `StreamID` bigint(20) NOT NULL COMMENT 'stream identifier',
  `FAvailability` int(11) DEFAULT NULL COMMENT 'availability',
  `Bandwidth` int(11) DEFAULT NULL COMMENT 'bandwidth',
  `ValidBandwidth` int(11) DEFAULT NULL COMMENT 'effective bandwidth',
  `Ts_Missing` int(11) DEFAULT NULL COMMENT 'TS packet loss rate',
  `MDI_DF` int(11) DEFAULT NULL COMMENT 'MDI-DF',
  `MDI_MLR` int(11) DEFAULT NULL COMMENT 'MDI-MLR',
  `Delay_Time` int(11) DEFAULT NULL COMMENT 'stream response delay',
  `Cl_date` varchar(50) DEFAULT NULL COMMENT 'measurement time',
  `IPIntervalAvg` int(11) DEFAULT NULL COMMENT 'average packet interval',
  `IPIntervalMax` int(11) DEFAULT NULL COMMENT 'maximum packet interval',
  `IPJitter` int(11) DEFAULT NULL COMMENT 'IP packet jitter',
  `MLT15` int(11) DEFAULT NULL COMMENT 'MLT15',
  `MLT24` int(11) DEFAULT NULL COMMENT 'MLT24',
  `MLS` int(11) DEFAULT NULL COMMENT 'MLS',
  `SliceNum` int(11) DEFAULT NULL COMMENT 'total slice count',
  `CachedTime` int(11) DEFAULT NULL COMMENT 'buffering duration (seconds)',
  `StuckTime` int(11) DEFAULT NULL COMMENT 'stall duration (seconds)',
  `GetSliceErr` int(11) DEFAULT NULL COMMENT 'slice request error count',
  `RetransmitCount` int(11) DEFAULT NULL COMMENT 'retransmission count',
  `RetransmitRate` int(11) DEFAULT NULL COMMENT 'retransmission rate',
  `RepeatCount` int(11) DEFAULT NULL COMMENT 'duplicate packet count',
  `RepeatRate` int(11) DEFAULT NULL COMMENT 'duplicate packet rate',
  `VideoBandwidth` int(11) DEFAULT NULL COMMENT 'video bandwidth',
  `AudioBandwidth` int(11) DEFAULT NULL COMMENT 'audio bandwidth',
  `PID_COUNT` int(11) DEFAULT NULL COMMENT 'PID count',
  `TS_CC` int(11) DEFAULT NULL COMMENT 'TS_CC',
  `MLS_15` int(11) DEFAULT NULL COMMENT 'MLS-15',
  `MLS_24` int(11) DEFAULT NULL COMMENT 'MLS-24',
  `TCC_15` int(11) DEFAULT NULL COMMENT 'TCC-15',
  `TCC_24` int(11) DEFAULT NULL COMMENT 'TCC-24',
  `RTP_DROP` int(11) DEFAULT NULL COMMENT 'RTP packet loss index',
  `DownloadRate` int(11) DEFAULT NULL COMMENT 'download rate',
  `DownloadPercentage` int(11) DEFAULT NULL COMMENT 'download percentage',
  `ChaosCount` int(11) DEFAULT NULL COMMENT 'out-of-order packet count',
  `LowWindow` int(11) DEFAULT NULL COMMENT 'low window threshold',
  `Part` int(11) NOT NULL COMMENT 'partition column',
  PRIMARY KEY (`ID`,`Part`),
  KEY `cl_date_index` (`Cl_date`),
  KEY `stream_id_index` (`StreamID`)
) ENGINE=InnoDB AUTO_INCREMENT=3402502 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY LIST (Part) (PARTITION p VALUES IN (0) ENGINE = InnoDB) */;
CREATE TABLE `T_LIVE_FAULT` (
  `ID` int(11) NOT NULL AUTO_INCREMENT COMMENT 'identifier',
  `StreamId` int(11) DEFAULT NULL COMMENT 'stream ID',
  `Fault_ID` int(11) DEFAULT NULL COMMENT 'fault type ID',
  `Fault_Status` int(11) DEFAULT NULL COMMENT 'alarm status',
  `StartTime` char(50) CHARACTER SET utf8 DEFAULT NULL COMMENT 'start time',
  `Endtime` char(50) CHARACTER SET utf8 DEFAULT NULL COMMENT 'recovery time',
  `Fault_NO` int(11) DEFAULT NULL COMMENT 'alarm number',
  `Fault_Desc` varchar(100) DEFAULT NULL COMMENT 'alarm description',
  `Part` int(11) NOT NULL COMMENT 'partition column',
  PRIMARY KEY (`ID`,`Part`),
  KEY `INDEX1` (`StreamId`,`Fault_NO`),
  KEY `StartTime` (`StartTime`)
) ENGINE=InnoDB AUTO_INCREMENT=1137503 DEFAULT CHARSET=gb2312
/*!50100 PARTITION BY LIST (Part) (PARTITION p VALUES IN (0) ENGINE = InnoDB) */;
CREATE TABLE `T_STREAM_MAP` (
  `ID` bigint(20) NOT NULL COMMENT 'primary key',
  `StartTime` varchar(20) NOT NULL COMMENT 'stream start time',
  `EndTime` varchar(20) NOT NULL COMMENT 'stream end time',
  `SourceIP` varchar(20) CHARACTER SET latin1 DEFAULT NULL COMMENT 'source IP',
  `SourcePort` int(11) DEFAULT NULL COMMENT 'source port',
  `DestIP` varchar(20) CHARACTER SET latin1 DEFAULT NULL COMMENT 'destination IP',
  `DestPort` int(11) DEFAULT NULL COMMENT 'destination port',
  `VLAN` int(11) DEFAULT NULL COMMENT 'VLAN',
  `NetworkPort` int(11) DEFAULT NULL COMMENT 'monitored network port',
  `StreamName` varchar(20) DEFAULT NULL COMMENT 'stream alias',
  `StreamType` int(1) DEFAULT NULL COMMENT 'stream type',
  `Protocol` varchar(20) DEFAULT NULL COMMENT 'protocol type',
  `URL` varchar(100) DEFAULT NULL COMMENT 'URL',
  `WorkMode` int(11) DEFAULT NULL COMMENT 'work mode',
  `MediaType` varchar(10) DEFAULT NULL COMMENT 'media type',
  PRIMARY KEY (`ID`),
  KEY `index_map_sourceip` (`SourceIP`),
  KEY `index_map_streamType` (`StreamType`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
/* Table structure for table `T_STREAM_INDICATOR` */
DROP TABLE IF EXISTS `T_STREAM_INDICATOR`;

CREATE TABLE `T_STREAM_INDICATOR` (
  `INDICATOR_ID` int(2) NOT NULL AUTO_INCREMENT COMMENT 'indicator ID',
  `INDICATOR_NAME` varchar(20) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'indicator name',
  `STREAM_TYPE_ID` int(1) DEFAULT NULL COMMENT 'stream type ID',
  `INDICATOR_SORT` int(1) DEFAULT NULL COMMENT 'sort order: 0 ascending, 1 descending',
  `INDICATOR_TYPE` varchar(20) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'corresponding column name in the stream table',
  PRIMARY KEY (`INDICATOR_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
The T_STREAM_MAP table does not hold that much data, so it is not partitioned; expired rows are simply removed with DELETE based on the table's end-time (EndTime) column.
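That cleanup boils down to one DELETE over the retention window. A sketch of building the statement (the class and method names are hypothetical; the servlet below builds an equivalent string with its DateTimeUtil helper):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class StreamMapCleanup {
    // Build the DELETE that purges finished streams older than the
    // configured retention (in hours). EndTime is stored as a string in
    // 'yyyy-MM-dd HH:mm:ss' format, and empty values (still-running
    // streams) must be excluded explicitly.
    static String buildDeleteSql(Calendar now, int dataSaveTimeHours) {
        Calendar cutoff = (Calendar) now.clone();
        cutoff.add(Calendar.HOUR_OF_DAY, -dataSaveTimeHours);
        String dtStr = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(cutoff.getTime());
        return "DELETE FROM T_STREAM_MAP WHERE EndTime < '" + dtStr + "' AND EndTime <> ''";
    }

    public static void main(String[] args) {
        Calendar now = Calendar.getInstance();
        now.set(2014, Calendar.MARCH, 30, 12, 0, 0);
        System.out.println(buildDeleteSql(now, 24));
    }
}
```

Because EndTime is a string column, the comparison only works while every stored value uses the same fixed-width date format.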
The ExecutSqlLog table records a log of these data-maintenance operations.
The servlet loaded when the project starts:
<servlet>
    <servlet-name>DDL_TABLEServlet</servlet-name>
    <servlet-class>com.bohui.ipview.common.DDL_TABLEServlet</servlet-class>
    <load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>DDL_TABLEServlet</servlet-name>
    <url-pattern>/DDL_TABLEServlet</url-pattern>
</servlet-mapping>
The servlet code:
package com.bohui.ipview.common;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Properties;
import java.util.Timer;
import java.util.TimerTask;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

import org.jdom.Document;
import org.jdom.Element;
import org.jdom.JDOMException;
import org.jdom.input.SAXBuilder;

import com.bohui.ipview.util.DateTimeUtil;

/**
 * Loaded when the web container starts.
 * Every 30 s the current time is checked; at half past each hour the existing
 * partitions of the metrics table and the alarm table are queried and compared
 * against the current time in order to:
 * add partitions (keep partitions for the current hour plus the next four hours), and
 * drop partitions (everything older than N hours, where N is the data retention
 * time from the device configuration page).
 *
 * Stream-map cleanup:
 * rows whose (EndTime + retention) &lt; now are removed with DELETE.
 * @author caohaicheng
 */
public class DDL_TABLEServlet extends HttpServlet {
    static Timer t1 = null;
    ArrayList<String> streamPartList = new ArrayList<String>();
    ArrayList<String> faultPartList = new ArrayList<String>();
    Boolean flag = true;
    String path = "";

    // Note: MySQL will not let you drop every partition of a table, so the
    // schema keeps one initial partition created together with the table.
    public void init(ServletConfig config) throws ServletException {
        CommonUtil.initDDLServlet = true;
        path = config.getServletContext().getRealPath("/") + "WEB-INF" + File.separator
                + "config" + File.separator + "BSHConfig.properties";
        Properties prop = new Properties();
        FileInputStream fis;
        try {
            fis = new FileInputStream(path);
            prop.load(fis); // load the property file into the Properties object
            fis.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        CommonUtil.IP = prop.getProperty("PlatformIP");
        t1 = new Timer(false);
        t1.schedule(new TimerTask() {
            public void run() {
                // database connection URL
                DBConnection.url = "jdbc:mysql://" + CommonUtil.IP + ":3306/IPVIEW?user=root&password=123456";
                Date dt = new Date();
                String executDate = DateTimeUtil.nowToString();
                // at half past each hour, drop and add partitions
                int m = dt.getMinutes();
                if (m == 30) {
                    flag = true;
                }
                if (flag) {
                    flag = false;
                    // clear the cached partition lists
                    streamPartList.clear();
                    faultPartList.clear();
                    // query the existing partitions of both tables
                    String sql = "SELECT PARTITION_NAME FROM information_schema.PARTITIONS WHERE TABLE_SCHEMA='IPVIEW' AND TABLE_NAME='T_STREAM' ORDER BY PARTITION_ORDINAL_POSITION ";
                    String sql1 = "SELECT PARTITION_NAME FROM information_schema.PARTITIONS WHERE TABLE_SCHEMA='IPVIEW' AND TABLE_NAME='T_LIVE_FAULT' ORDER BY PARTITION_ORDINAL_POSITION ";
                    String[] params = {};
                    try {
                        ResultSet rs = DBConnection.execQuery(sql, params);
                        ResultSet rs1 = DBConnection.execQuery(sql1, params);
                        while (rs.next()) {
                            streamPartList.add(rs.getString(1));
                        }
                        while (rs1.next()) {
                            faultPartList.add(rs1.getString(1));
                        }
                        DBConnection.ps.close();
                        DBConnection.rs.close();
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                    // work out which partitions to add and which to drop
                    List<String> addPartSql = new ArrayList<String>();
                    List<String> dropPartSql = new ArrayList<String>();
                    for (int i = 0; i < 5; i++) {
                        String partName = DateTimeUtil.dateAddHours(i, "yyyyMMddHH");
                        partName = "p" + partName;
                        // create any of these five partitions that are missing
                        if (!streamPartList.contains(partName)) {
                            String addStreamPartSql = "ALTER TABLE T_STREAM ADD PARTITION (PARTITION " + partName + " VALUES IN (" + partName.substring(1) + "))";
                            String logSql = "INSERT INTO ExecutSqlLog(ExecutTime,ExecutSql,ExecutType) VALUES ('" + executDate + "','" + addStreamPartSql + "',0);";
                            addPartSql.add(addStreamPartSql);
                            addPartSql.add(logSql);
                        }
                        if (!faultPartList.contains(partName)) {
                            String addFaultPartSql = "ALTER TABLE T_LIVE_FAULT ADD PARTITION (PARTITION " + partName + " VALUES IN (" + partName.substring(1) + "))";
                            String logSql = "INSERT INTO ExecutSqlLog(ExecutTime,ExecutSql,ExecutType) VALUES ('" + executDate + "','" + addFaultPartSql + "',2);";
                            addPartSql.add(addFaultPartSql);
                            addPartSql.add(logSql);
                        }
                    }
                    // batch-execute the ADD PARTITION statements (metrics and alarm tables)
                    DBConnection.execSql(addPartSql);

                    // read the data retention time (dataSaveTime)
                    File file = new File(CommonUtil.webAppPath + "resources" + File.separator + "config" + File.separator + "DeviceConfig.xml");
                    SAXBuilder saxBuilder = new SAXBuilder();
                    Document xmldoc = new Document();
                    try {
                        xmldoc = saxBuilder.build(file);
                    } catch (JDOMException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    Element root = xmldoc.getRootElement();
                    Element BSHConfig = root.getChild("BSHConfig");
                    Element DBInterval = BSHConfig.getChild("DBInterval");
                    int dataSaveTime = Integer.parseInt(DBInterval.getAttributeValue("DataSaveTime"));
                    // convert the retention cutoff into the partition-name time format
                    String part = DateTimeUtil.dateAddHours(-dataSaveTime, "yyyyMMddHH");
                    for (int i = 1; i < streamPartList.size(); i++) { // skip the initial partition created with the table
                        if (part.compareTo(streamPartList.get(i).substring(1)) > 0) {
                            String dropPartStream = "ALTER TABLE T_STREAM DROP PARTITION " + streamPartList.get(i) + " ";
                            dropPartSql.add(dropPartStream);
                            String logSql = "INSERT INTO ExecutSqlLog(ExecutTime,ExecutSql,ExecutType) VALUES ('" + executDate + "','" + dropPartStream + "',1);";
                            dropPartSql.add(logSql);
                        }
                    }
                    for (int i = 1; i < faultPartList.size(); i++) { // skip the initial partition created with the table
                        if (part.compareTo(faultPartList.get(i).substring(1)) > 0) {
                            String dropPartSqlFault = "ALTER TABLE T_LIVE_FAULT DROP PARTITION " + faultPartList.get(i) + " ";
                            dropPartSql.add(dropPartSqlFault);
                            String logSql = "INSERT INTO ExecutSqlLog(ExecutTime,ExecutSql,ExecutType) VALUES ('" + executDate + "','" + dropPartSqlFault + "',3);";
                            dropPartSql.add(logSql);
                        }
                    }
                    // batch-execute the DROP PARTITION statements (metrics and alarm tables)
                    DBConnection.execSql(dropPartSql);

                    // clean up T_STREAM_MAP
                    String dtStr = DateTimeUtil.dateAddHours(-dataSaveTime, "yyyy-MM-dd HH:mm:ss"); // current time minus retention
                    String deleteStreamMapSql = " DELETE FROM T_STREAM_MAP WHERE EndTime< '" + dtStr + "' AND EndTime<>''";
                    String delLogSql = "INSERT INTO ExecutSqlLog(ExecutTime,ExecutSql,ExecutType) VALUES ('" + executDate + "',\"DELETE FROM T_STREAM_MAP WHERE EndTime< '" + dtStr + "' AND EndTime<>''\",4);";
                    List<String> deleteStreamMapSqlList = new ArrayList<String>();
                    deleteStreamMapSqlList.add(deleteStreamMapSql);
                    deleteStreamMapSqlList.add(delLogSql);
                    DBConnection.execSql(deleteStreamMapSqlList);
                }
            }
        }, 0, 30000);
    }
}

The code needs to connect to the database; BSHConfig.properties is read only to obtain the database server's IP address.
The XML file is read to obtain the data retention time the user configured on the web page. Since MySQL supports at most 1024 partitions per table, the number of hours the user configures should in theory stay below 1024.
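A rough sanity check of that limit can be sketched as follows; the exact accounting (one partition per retained hour, plus the current hour and four pre-created future hours, plus the initial placeholder partition) is an assumption based on the servlet's behavior:

```java
public class RetentionCheck {
    // MySQL's per-table partition cap, as noted above.
    static final int MYSQL_MAX_PARTITIONS = 1024;

    // Hypothetical helper: verify a user-configured retention (in hours)
    // cannot push the table past the partition limit.
    static boolean isValidRetention(int dataSaveTimeHours) {
        // retained hours + (current hour and 4 future hours) + initial partition
        int total = dataSaveTimeHours + 5 + 1;
        return total <= MYSQL_MAX_PARTITIONS;
    }

    public static void main(String[] args) {
        System.out.println(isValidRetention(24));   // comfortably within the limit
        System.out.println(isValidRetention(1020)); // would exceed the limit
    }
}
```

In practice the configuration page would reject values anywhere near the limit, but a guard like this makes the constraint explicit.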
The shared database connection class:
package com.bohui.ipview.common;

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

/**
 * Shared helper class for JDBC database access.
 *
 * @author
 * @version 1.0.1
 */
public class DBConnection {
    /** data source */
    private static DataSource dataSource;
    /** database connection */
    private static Connection connection;
    /** prepared statement */
    public static PreparedStatement ps;
    /** statement */
    public static Statement statement;
    /** result set returned by queries */
    public static ResultSet rs;
    /** whether transactions auto-commit; defaults to true. When false, commits must be issued manually. */
    private static boolean autoCommit = true;
    /** whether the database connection has been opened */
    private static boolean openConnection;
    /** JNDI name */
    private static String jndiName;
    /** JDBC driver */
    private static String driver = "com.mysql.jdbc.Driver";
    /** JDBC URL */
    public static String url = "";
    /** user name */
    private static String user;
    /** password */
    private static String pwd;

    /**
     * Open the database connection and create the connection object.<br/>
     * Supports injecting (via IoC) a data source, JDBC driver, JNDI name,
     * JDBC URL, user name and password.
     *
     * @return boolean true if the connection was opened, false otherwise
     */
    public static boolean openConnection() {
        // obtain the connection from a data source
        if (dataSource != null) {
            try {
                connection = dataSource.getConnection();
                openConnection = true;
            } catch (SQLException e) {
                closeAll();
                System.out.println("Failed to get a connection from the data source!");
                throw new RuntimeException(e);
            }
            return openConnection;
        }
        // obtain the connection via JNDI
        if (jndiName != null) {
            try {
                Context initContext = new InitialContext();
                dataSource = (DataSource) initContext.lookup(jndiName);
                connection = dataSource.getConnection();
                openConnection = true;
            } catch (Exception e) {
                closeAll();
                System.out.println("Failed to get a connection via JNDI!");
                throw new RuntimeException(e);
            }
            return openConnection;
        }
        // obtain the connection from the driver, URL, user name and password
        try {
            Class.forName(driver);
            connection = DriverManager.getConnection(url);
            openConnection = true;
            CommonUtil.initConnection = true;
        } catch (Exception e) {
            closeAll();
            System.out.println("Database connection failed!");
            throw new RuntimeException(e);
        }
        return openConnection;
    }

    /**
     * Execute a batch of SQL statements.
     *
     * @param sql list of SQL statements to execute
     * @return boolean true on success, false on failure
     */
    public static boolean execSql(List<?> sql) {
        return execSql(sql.toArray());
    }

    /**
     * Execute a batch of SQL statements.
     *
     * @param sql array of SQL statements to execute
     * @return boolean true on success, false on failure
     */
    public static boolean execSql(Object[] sql) {
        boolean flag = false;
        // check that the connection has been opened
        if (openConnection) {
            for (int i = 0; i < sql.length; i++) {
                try {
                    ps = connection.prepareStatement(sql[i].toString());
                    ps.execute();
                } catch (SQLException e) {
                    System.out.println("SQL execution failed: " + sql[i]);
                    e.printStackTrace();
                    closeAll();
                    return false;
                }
            }
            flag = true;
            closeAll();
        } else {
            System.out.println("The database connection is not open!");
        }
        return flag;
    }

    /**
     * Execute a query.
     *
     * @param sql SQL statement to execute
     * @param args query parameters
     * @return ResultSet the query result set
     */
    public ResultSet execQuery(String sql, List<?> args) {
        return execQuery(sql, args.toArray());
    }

    /**
     * Execute a query.
     *
     * @param sql SQL statement to execute
     * @param args query parameters
     * @return ResultSet the query result set
     */
    public static ResultSet execQuery(String sql, Object... args) {
        rs = null;
        // check that the connection can be opened
        if (openConnection()) {
            try {
                ps = connection.prepareStatement(sql);
                // bind parameters
                if (args != null && args.length > 0) {
                    for (int i = 0; i < args.length; i++) {
                        ps.setObject(i + 1, args[i]);
                    }
                }
                rs = ps.executeQuery();
            } catch (SQLException e) {
                if (autoCommit) {
                    closeAll();
                }
                System.out.println("SQL:" + sql);
                throw new RuntimeException(e);
            }
        } else {
            System.out.println("The database connection is not open!");
        }
        return rs;
    }

    /**
     * Query the database for a single int value.
     *
     * @param sql SQL statement to execute
     * @param args query parameters
     * @return the int value, or -1 on error
     */
    public int findForInt(String sql, Object... args) {
        ResultSet rs = execQuery(sql, args);
        int count = -1;
        try {
            if (rs != null && rs.next()) {
                count = rs.getInt(1);
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            if (autoCommit) {
                closeAll();
            }
        }
        return count;
    }

    /**
     * Query the database for a single int value.
     *
     * @param sql SQL statement to execute
     * @param args query parameters
     * @return the int value, or -1 on error
     */
    public int findForInt(String sql, List<?> args) {
        return findForInt(sql, args.toArray());
    }

    /**
     * Close all statement and result-set objects; the connection itself
     * is left open.
     */
    public static void closeAll() {
        if (rs != null || ps != null || statement != null) {
            try {
                if (rs != null) {
                    rs.close();
                }
            } catch (SQLException e) {
                throw new RuntimeException(e);
            } finally {
                rs = null;
                if (ps != null || statement != null) {
                    try {
                        if (ps != null && !ps.isClosed()) {
                            ps.close();
                        }
                    } catch (SQLException e) {
                        throw new RuntimeException(e);
                    } finally {
                        ps = null;
                        if (statement != null) {
                            try {
                                if (statement != null && !statement.isClosed()) {
                                    statement.close();
                                }
                            } catch (SQLException e) {
                                throw new RuntimeException(e);
                            } finally {
                                statement = null;
                            }
                        }
                    }
                }
            }
        }
    }

    /**
     * Commit the transaction and close.
     */
    public void commit() {
        try {
            if (!autoCommit) {
                connection.commit();
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            autoCommit = true;
            closeAll();
        }
    }

    /**
     * Roll back the transaction and close.
     */
    public void rollback() {
        try {
            if (!autoCommit) {
                connection.rollback();
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            autoCommit = true;
            closeAll();
        }
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String getDriver() {
        return driver;
    }

    public void setDriver(String driver) {
        this.driver = driver;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public boolean getAutoCommit() {
        return autoCommit;
    }

    public void setAutoCommit(boolean autoCommit) {
        try {
            connection.setAutoCommit(autoCommit);
        } catch (SQLException e) {
            closeAll();
            throw new RuntimeException(e);
        }
        this.autoCommit = autoCommit;
    }

    public boolean getOpenConnection() {
        return openConnection;
    }

    public String getJndiName() {
        return jndiName;
    }

    public void setJndiName(String jndiName) {
        this.jndiName = jndiName;
    }

    public String getUser() {
        return user;
    }

    public void setUser(String user) {
        this.user = user;
    }

    public String getPwd() {
        return pwd;
    }

    public void setPwd(String pwd) {
        this.pwd = pwd;
    }

    public Connection getConnection() {
        return connection;
    }

    /**
     * Test whether the database connection can be opened.
     *
     * @param args
     * @throws SQLException
     */
    /*
    public static void main(String[] args) throws SQLException {
        String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
        String url = "jdbc:sqlserver://127.0.0.1:1433;databaseName=demo";
        String user = "sa";
        String pwd = "sa";
        CommonSql commonSql = new CommonSql(driver, url, user, pwd);
        if (commonSql.openConnection()) {
            System.out.println("Database connection succeeded!");
            DatabaseMetaData dbMetaData = commonSql.getConnection().getMetaData();
            System.out.print("Connected database: " + dbMetaData.getDatabaseProductName());
            System.out.println(" " + dbMetaData.getDatabaseProductVersion());
        } else {
            System.out.println("Database connection failed!");
        }
        commonSql.closeAll();
    }
    */
}
Using Java's Timer: when the project starts, the database is scanned once to check whether the partitions for the current hour and the next four hours exist; any that are missing are created, and any expired partitions are dropped. After that the timer fires every 30 s, and whenever the half-hour mark comes around it scans the database again and performs the same maintenance.
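The trigger logic can be isolated as a small sketch (class and method names are hypothetical). One caveat worth noting: because the timer ticks every 30 s, minute 30 is seen twice per hour, so the original flag scheme can run the maintenance twice within that minute.

```java
public class HalfHourScanner {
    // Mirrors the servlet's flag: starts armed so the first scan after
    // startup runs immediately, then re-arms at half past each hour.
    private boolean armed = true;

    // Called once per 30-second tick with the current minute of the hour;
    // returns true when the maintenance pass should run.
    boolean shouldRun(int minuteOfHour) {
        if (minuteOfHour == 30) {
            armed = true;
        }
        if (armed) {
            armed = false;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        HalfHourScanner s = new HalfHourScanner();
        System.out.println(s.shouldRun(0));  // true: first scan after startup
        System.out.println(s.shouldRun(1));  // false
        System.out.println(s.shouldRun(30)); // true: re-armed at half past
    }
}
```

Tracking the last hour that ran, rather than a bare flag, would avoid the possible double run at minute 30.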
Source: http://blog.csdn.net/caohaicheng/article/details/22308869