Stepping on a JVM memory-overflow pitfall: the MyBatis level 1 and level 2 caches

I developed a producer/consumer feature to spec: the producer reads data from an Oracle database and sends it to the consumer through Kafka; the consumer reads the data from Kafka and writes it to a MySQL database. The requirement is that simple. The total volume is 50 million rows, read and written to Kafka by 3 producers.

After the code was written and passed a quick functional test, it started running in the test environment. In less than an hour of sending data, an OOM ("GC overhead limit exceeded") occurred. After confirming the problem was reproducible, an OOM exception occurred every time roughly 1.5 to 2 million rows were sent.

Next, I profiled the code locally with JProfiler and found that after running for a while, large numbers of char[] and String objects were never released, as shown in the figure:

Their total size kept growing, which is the direct cause of the OOM. So the question becomes: what created so many char[] and String objects?

First, check the code:

String topic = "testCsrk1";
String clientId = "CsrkProducer0";
int begin = 630681;      // start position (630681 / 28000000)
int end = 28000000;      // end position (70987838 / 204138956)
int count = 1;           // loop control flag
int posi = 100;          // number of rows per batch
int id = begin;
OracleProducer producer = new OracleProducer(topic, false, clientId);
SqlSession sqlSession = factory.openSession(false);
OCsrkMapper oCsrkMapper = sqlSession.getMapper(OCsrkMapper.class);

int num = 0;
while (count > 0) {
    Map<String, Object> paramMap = new HashMap<String, Object>();
    paramMap.put("id", id);
    paramMap.put("posi", posi);
    paramMap.put("endid", id + posi);
    List<Csrk> list = oCsrkMapper.selectCsrks(paramMap); // query a batch from the database
    int size = list.size();
    num += size;
    logger.info("******* rows this batch: " + size);
    System.out.println("******* total rows: " + num);
    if (id >= end) { // stop once id passes the end of the range
        System.out.println("id is " + id);
        count = 0;
    }
    if (size == 0) { // empty id range: skip ahead
        id = id + posi;
    }
    for (int i = 0; i < size; i++) {
        Csrk csrk = list.get(i);
        if (csrk != null) {
            producer.send(csrk); // wrapped Kafka producer sends the record
        }
        id = list.get(i).getID(); // advance to the last id seen
    }
}
producer.close();
System.out.println("producer closed");
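The loop above is a form of keyset pagination: each batch is selected by an id range, and id either advances to the last id returned or jumps ahead by posi when a range is empty. A self-contained sketch of just that advancing logic, with an in-memory list standing in for the database (all names here are illustrative, not from the real project):

```java
import java.util.ArrayList;
import java.util.List;

public class KeysetPaginationSketch {
    // Stand-in for the database query: rows with fromId < id <= toId.
    static List<Integer> fetchRange(List<Integer> table, int fromId, int toId) {
        List<Integer> batch = new ArrayList<>();
        for (int rowId : table) {
            if (rowId > fromId && rowId <= toId) {
                batch.add(rowId);
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Integer> table = new ArrayList<>();
        for (int i = 1; i <= 25; i++) {
            if (i < 11 || i > 15) table.add(i); // a gap simulates missing ids
        }
        int id = 0, posi = 5, end = 25, count = 1, num = 0;
        while (count > 0) {
            List<Integer> batch = fetchRange(table, id, id + posi);
            num += batch.size();
            if (id >= end) count = 0;           // stop once id passes the end
            if (batch.isEmpty()) id += posi;    // skip over an empty id range
            for (int rowId : batch) id = rowId; // advance to the last id seen
        }
        System.out.println(num); // total rows paged through
    }
}
```

The empty-range branch is what lets the loop survive id gaps without spinning forever.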

The code does not use char[] directly, and its use of String is nothing unusual, so the problem has to be somewhere else.

Objects accumulating like this usually point to a cache, and MyBatis has two: a level 1 cache and a level 2 cache.

The level 1 cache is scoped to the SqlSession. When the same SQL statement is executed twice in the same SqlSession, the first query writes the data read from the database into the cache (in memory); the second query gets the data from the cache instead of hitting the underlying database, which improves query efficiency. Note that if the SqlSession performs a DML operation (insert, update, or delete) and commits it to the database, MyBatis clears the level 1 cache in that SqlSession; this keeps the cache current and avoids dirty reads. When the SqlSession ends, its level 1 cache is gone with it.
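To make those semantics concrete, here is a minimal, self-contained simulation of level-1-cache behavior: a per-session map keyed by statement plus parameters, cleared on any DML. The class and method names are illustrative, not the MyBatis API:

```java
import java.util.HashMap;
import java.util.Map;

public class Level1CacheSketch {
    private final Map<String, Object> cache = new HashMap<>(); // session-scoped cache
    private int dbHits = 0; // counts real "database" round trips

    // Query: an identical statement+params pair is served from the cache.
    public Object select(String statement, String params) {
        String key = statement + "|" + params;
        if (cache.containsKey(key)) {
            return cache.get(key); // second identical query: no database hit
        }
        dbHits++;
        Object result = "row-for-" + params; // stand-in for a real result set
        cache.put(key, result);
        return result;
    }

    // Any DML clears the whole session cache to avoid dirty reads.
    public void update(String statement) {
        dbHits++;
        cache.clear();
    }

    public int getDbHits() { return dbHits; }
    public int cachedEntries() { return cache.size(); }
}
```

Two identical selects cost one database hit; an update in between forces the next select back to the database. Note the flip side, which is exactly the bug in this article: every distinct parameter set adds an entry that lives as long as the session does.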

The level 2 cache is scoped to a mapper namespace. When different SqlSessions execute the same SQL statement with the same parameters under the same namespace, the first execution writes the data read from the database into the cache; a later identical query gets the data from the cache instead of the underlying database, again improving efficiency. In other words, the level 2 cache shares data across SqlSessions.
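In the mapper XML, a bare `<cache/>` element is what enables the level 2 cache for the whole namespace. A sketch of what such a mapper file might look like for the query in the code above (the namespace, result type, and SQL body are illustrative assumptions, not the project's actual mapper):

```xml
<mapper namespace="OCsrkMapper">
  <!-- enables the level 2 cache for every statement in this namespace -->
  <cache/>
  <!-- selects in this namespace participate in the level 2 cache by default -->
  <select id="selectCsrks" parameterType="map" resultType="Csrk">
    SELECT * FROM CSRK WHERE ID &gt; #{id} AND ID &lt;= #{endid}
  </select>
</mapper>
```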

Note: the level 1 cache cannot be turned off.

Back to the code: the sqlSession is created only once, so once the level 1 cache fills up, it is never cleared. Modify the code to clear the level 1 cache before each query:

sqlSession.clearCache(); // clear the level 1 cache before each query
List<Csrk> list = oCsrkMapper.selectCsrks(paramMap); // query a batch from the database
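Besides calling clearCache() by hand, MyBatis also has a global localCacheScope setting: the default SESSION keeps level 1 cache entries for the life of the SqlSession, while STATEMENT keeps them only for the duration of a single statement, which avoids this kind of accumulation without touching the query loop. A sketch of the relevant fragment of mybatis-config.xml:

```xml
<configuration>
  <settings>
    <!-- SESSION (default): cache lives with the SqlSession; STATEMENT: per-statement only -->
    <setting name="localCacheScope" value="STATEMENT"/>
  </settings>
</configuration>
```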

Testing again with JProfiler, the char[] and String objects were still growing. Opening the MyBatis XML mapper file revealed that the level 2 cache was enabled:

<cache/>

Turn off the level 2 cache for the corresponding SQL statement:

useCache="false"
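In context, this attribute goes on the `<select>` element itself (the statement id comes from the code above; the SQL body is an illustrative assumption):

```xml
<!-- useCache="false" excludes this statement from the level 2 cache -->
<select id="selectCsrks" parameterType="map" resultType="Csrk" useCache="false">
  SELECT * FROM CSRK WHERE ID &gt; #{id} AND ID &lt;= #{endid}
</select>
```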

Re-running under JProfiler, the combined size of the char[] and String objects stayed below 10 MB. Continuously sending more than 22 million rows took less than 90 minutes, with memory usage stable below 80 MB.

Follow-up questions

1. Would this problem still occur if the `<cache/>` tag were not used in the XML?

Removing the `<cache/>` tag from the XML along with the useCache="false" attribute and running again: the char[] and String objects still grew over time.

Then, adding sqlSession.clearCache(); before each query to clear the previous level 1 cache entries, the char[] and String objects stayed stable and no longer grew over time, showing that the level 1 cache was being cleared successfully.