java - Batch insert using the Neo4j REST graph database

I am using version 2.0.1.

I have thousands of nodes that need to be inserted. My Neo4j graph database sits on a standalone server, and I am doing this through the RestApi of the neo4j-rest-graphdb library.

However, I am running into slow performance. I split my queries into batches, sending 500 Cypher statements in one HTTP call. The results I get are:

10:38:10.984 INFO commit
10:38:13.161 INFO commit
10:38:13.277 INFO commit
10:38:15.132 INFO commit
10:38:15.218 INFO commit
10:38:17.288 INFO commit
10:38:19.488 INFO commit
10:38:22.020 INFO commit
10:38:24.806 INFO commit
10:38:27.848 INFO commit
10:38:31.172 INFO commit
10:38:34.767 INFO commit
10:38:38.661 INFO commit

And so on.
The query I am using is as follows:

MERGE (a{main:{val1},prop2:{val2}}) MERGE (b{main:{val3}}) CREATE UNIQUE (a)-[r:relationshipname]-(b);

My code looks like this:

private RestAPI restAPI;
private RestCypherQueryEngine engine;
private GraphDatabaseService graphDB = new RestGraphDatabase("http://localdomain.com:7474/db/data/");

restAPI = ((RestGraphDatabase) graphDB).getRestAPI();
engine = new RestCypherQueryEngine(restAPI);

Transaction tx = graphDB.getRestAPI().beginTx();

try {
    int ctr = 0;
    while (isExists) {
        ctr++;
        // execute the query here through engine.query()
        if (ctr % 500 == 0) {
            tx.success();
            tx.close();
            tx = graphDB.getRestAPI().beginTx();
            LOGGER.info("commit");
        }
    }
    tx.success();
} catch (FileNotFoundException | NumberFormatException | ArrayIndexOutOfBoundsException e) {
    tx.failure();
} finally {
    tx.close();
}
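The `engine.query()` call elided in the comment above takes the Cypher string plus a parameter map whose keys match the `{val1}`/`{val2}`/`{val3}` placeholders. A minimal sketch of building that map (the concrete values here are made-up placeholders, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class Params {
    static final String QUERY =
        "MERGE (a{main:{val1},prop2:{val2}}) MERGE (b{main:{val3}}) "
        + "CREATE UNIQUE (a)-[r:relationshipname]-(b)";

    // Parameter map matching the {val1}/{val2}/{val3} placeholders in QUERY.
    static Map<String, Object> params(Object val1, Object val2, Object val3) {
        Map<String, Object> p = new HashMap<>();
        p.put("val1", val1);
        p.put("val2", val2);
        p.put("val3", val3);
        return p;
    }

    public static void main(String[] args) {
        Map<String, Object> p = params("a_value", "another_value", "yet_another_value");
        // Inside the loop above this would be: engine.query(QUERY, p);
        System.out.println(p.keySet().size());
        System.out.println(p.get("val1"));
    }
}
```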

Thanks!

Updated benchmark.

I am sorry, the benchmark I posted was not accurate and did not correspond to 500 queries. My ctr variable was not actually counting the number of Cypher queries.

So now I am getting 500 queries per 3 seconds, and that 3 seconds keeps increasing as well. It is still slow compared to embedded Neo4j.

Solution:

If you are able to use Neo4j 2.1.0-M01 (don't use it in production yet!), you could benefit from a new feature. If you create/generate your CSV file like this:

val1,val2,val3
a_value,another_value,yet_another_value
a,b,c
....

you would only need to run the following code:

final GraphDatabaseService graphDB = new RestGraphDatabase("http://server:7474/db/data/");
final RestAPI restAPI = ((RestGraphDatabase) graphDB).getRestAPI();
final RestCypherQueryEngine engine = new RestCypherQueryEngine(restAPI);
final String filePath = "file://C:/your_file_path.csv";
engine.query("USING PERIODIC COMMIT 500 LOAD CSV WITH HEADERS FROM '" + filePath
    + "' AS csv MERGE (a{main:csv.val1,prop2:csv.val2}) MERGE (b{main:csv.val3})"
    + " CREATE UNIQUE (a)-[r:relationshipname]->(b);", null);
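If the data currently lives in Java, a CSV in the shape above can be written with plain stdlib I/O. A sketch using a temp file and the placeholder rows from the sample (file name and rows are illustrative only):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class CsvExport {
    public static void main(String[] args) throws IOException {
        Path csv = Files.createTempFile("neo4j-import", ".csv");
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(csv))) {
            out.println("val1,val2,val3");                        // header row LOAD CSV reads
            out.println("a_value,another_value,yet_another_value");
            out.println("a,b,c");
        }
        // First line should be the header row; three lines total.
        System.out.println(Files.readAllLines(csv).get(0));
        System.out.println(Files.readAllLines(csv).size());
    }
}
```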

You have to make sure that the file can be accessed from the machine on which your server is installed.

Take a look at my server plugin, which does this for you on the server. If you build it and put it into the plugins folder, you can use the plugin from Java like this:

final RestAPI restAPI = new RestAPIFacade("http://server:7474/db/data");
final RequestResult result = restAPI.execute(RequestType.POST, "ext/CSVBatchImport/graphdb/csv_batch_import",
    new HashMap<String, Object>() {
        {
            put("path", "file://C:/.../neo4j.csv");
        }
    });

Edit:

You can also use a BatchCallback in the Java REST wrapper to increase performance, and it removes the transactional boilerplate code as well. You could write a script similar to this:

final RestAPI restAPI = new RestAPIFacade("http://server:7474/db/data");
int counter = 0;
List<Map<String, Object>> statements = new ArrayList<>();
while (isExists) {
    statements.add(new HashMap<String, Object>() {
        {
            put("val1", "abc");
            put("val2", "abc");
            put("val3", "abc");
        }
    });
    if (++counter % 500 == 0) {
        restAPI.executeBatch(new Process(statements));
        statements = new ArrayList<>();
    }
}
// flush any remaining statements that didn't fill a full batch of 500
if (!statements.isEmpty()) {
    restAPI.executeBatch(new Process(statements));
}

static class Process implements BatchCallback<Object> {

    private static final String QUERY = "MERGE (a{main:{val1},prop2:{val2}}) MERGE (b{main:{val3}}) CREATE UNIQUE (a)-[r:relationshipname]-(b);";

    private List<Map<String, Object>> params;

    Process(final List<Map<String, Object>> params) {
        this.params = params;
    }

    @Override
    public Object recordBatch(final RestAPI restApi) {
        for (final Map<String, Object> param : params) {
            restApi.query(QUERY, param);
        }
        return null;
    }    
}
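Under the hood, batching like this boils down to posting many statements in one HTTP request; Neo4j 2.0's transactional Cypher endpoint (`POST /db/data/transaction/commit`) accepts a JSON body with a `statements` array for exactly this. A rough sketch of building such a body with hand-rolled strings (a real client would use a JSON library and pass parameters instead of inlining values):

```java
import java.util.ArrayList;
import java.util.List;

public class TxPayload {
    // Build the JSON body for POST /db/data/transaction/commit:
    // {"statements":[{"statement":"..."}, ...]}
    static String buildPayload(List<String> statements) {
        StringBuilder sb = new StringBuilder("{\"statements\":[");
        for (int i = 0; i < statements.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"statement\":\"")
              .append(statements.get(i).replace("\"", "\\\""))
              .append("\"}");
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            batch.add("MERGE (n{main:'v" + i + "'})");
        }
        String body = buildPayload(batch);
        // One POST of this body runs all 500 statements in one transaction.
        System.out.println(body.startsWith("{\"statements\":["));
        System.out.println(body.split("\\{\"statement\":").length - 1);
    }
}
```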