
Revamp of the LOAD CSV tutorial #470


Open
wants to merge 17 commits into base: dev

Conversation

Contributor

@lidiazuin commented May 6, 2025

Review of the csv-import.adoc page and addition of the csv-file.adoc page for general reference.

Contributor

@AlexicaWright left a comment

This is a review of the "Working with CSV files" section. I'll have to review the tutorial separately.


=== Data format

All data from the CSV file is read as a string, so you need to use `toInteger()`, `toFloat()`, `split()`, or similar functions to convert values, when needed.
Contributor

Suggested change
All data from the CSV file is read as a string, so you need to use `toInteger()`, `toFloat()`, `split()`, or similar functions to convert values, when needed.
Neo4j reads all data from the CSV file as strings; for other data types, you need to use `toInteger()`, `toFloat()`, `toBoolean()`, or similar functions to convert the data to the appropriate type.

split() doesn't change the data type from string, but splits it into separate entities, so it feels odd to group it with the functions that change the data type. It's mentioned later though, so maybe it's ok?
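For reference, a minimal sketch of this kind of conversion during a load (the file name and column names here are hypothetical):

[source,cypher]
----
// Hypothetical companies.csv with companyId, name, and founded columns; all values arrive as strings
LOAD CSV WITH HEADERS FROM 'file:///companies.csv' AS row
MERGE (c:Company {companyId: toInteger(row.companyId)})
SET c.name = row.name,
    c.founded = toInteger(row.founded)
----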

=== Field terminator

Also known as delimiter, a field terminator is a character used to separate each field in a CSV file.
In this example, a comma (`,`) is used, but other characters, such as a tab (`\`) or a pipe (`|`) also work and they can be blended:
Contributor

Suggested change
In this example, a comma (`,`) is used, but other characters, such as a tab (`\`) or a pipe (`|`) also work and they can be blended:
In this example, a comma (`,`) is used, but other characters, such as a tab (`\t`) or a pipe (`|`) also work and they can be blended:

Not sure if we need to escape the tab to make it render?
If you use a tab, the format is called TSV.

Contributor Author

No, the tab is working normally here, both building locally and in Surge. Regarding the TSV file format, do you think it's better to mention it or to remove the tab option as it would make the file a TSV instead of a CSV, which is the topic of this page?

Contributor

CSV and TSV are both flat files and there is no other difference AFAIK. About the tab, it's not just a backslash but also a t: `\t`.
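For reference, a minimal sketch of loading a tab-separated file by passing `\t` as the field terminator (the file name is hypothetical):

[source,cypher]
----
// The tab character is written as \t inside the FIELDTERMINATOR string
LOAD CSV WITH HEADERS FROM 'file:///people.tsv' AS row FIELDTERMINATOR '\t'
RETURN row
LIMIT 5
----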

For best performance, always `MATCH` and `MERGE` on a single label with the indexed primary-key property.
====

Suppose you use xref:#_converting_data_values[the preceding *companies.csv* file], and now you have a file that contains people and which companies they work for:
Contributor

Suggested change
Suppose you use xref:#_converting_data_values[the preceding *companies.csv* file], and now you have a file that contains people and which companies they work for:
Suppose that you have another file that contains people and which companies they work for, using a reference to the xref:#_converting_data_values[*companies.csv* file]:

4,Karen White,1
----

You should also separate node and relationship creation on a separate processing.
Contributor

Suggested change
You should also separate node and relationship creation on a separate processing.
To load these two files and create the appropriate relationships between the people in the `people.csv` file and the companies they work for in the `companies.csv` file, you first need to create nodes from both files and then create the relationships between them.
To make this process more efficient, it is recommended to separate these tasks, i.e. create the nodes in one clause per file, and then use a separate clause to create the relationships.
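For illustration, a minimal sketch of that two-pass approach, using the property and relationship names from the surrounding snippets (the exact column names, such as `name`, are assumptions):

[source,cypher]
----
// A uniqueness constraint or index on :Company(companyId) and :Employee(employeeId)
// keeps the MATCH and MERGE below fast.

// Pass 1: create the nodes, one clause per file
LOAD CSV WITH HEADERS FROM 'file:///companies.csv' AS row
MERGE (:Company {companyId: row.companyId});

LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
MERGE (:Employee {employeeId: row.employeeId, name: row.name});

// Pass 2: create the relationships between the existing nodes
LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
MATCH (e:Employee {employeeId: row.employeeId})
MATCH (c:Company {companyId: row.Company})
MERGE (e)-[:WORKS_FOR]->(c);
----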


[source,cypher,role=noplay]
----
// clear data
Contributor

Is this necessary?

MATCH (e:Employee {employeeId: row.employeeId})
MATCH (c:Company {companyId: row.Company})
MERGE (e)-[:WORKS_FOR]->(c)
RETURN *;
Contributor

What is returned here?

Co-authored-by: Jessica Wright <49636617+AlexicaWright@users.noreply.github.com>
Contributor

@AlexicaWright left a comment

Some further comments, but we're getting there! Thank you @lidiazuin!

--

Here, the movie and person data (including the IDs) is repeated in different rows every time new information about a particular actor's role is featured.
This sort of duplication compromises the structure of the data, which means you need to xref:#_preparing_the_csv_file[prepare your file] before importing.
Contributor

Maybe this can be rephrased a little? The duplication doesn't really compromise the structure of the data in general, does it? Only if you want your data in a graph structure.
Also, the link doesn't work.


== File location
Contributor

I still think this section should move to Using LOAD CSV. This page is all about the actual file, not about uploading it.

Comment on lines 182 to 184
* xref:data-import/csv-files.adoc#_cleaning_up[*Cleaning up CSV files*]: see how to use the `LOAD CSV` command to clean up the file while importing.
* xref:data-import/csv-files.adoc#_optimization[*Optimization*]: improve performance when working with large amounts of data or complex loading.
Contributor

These three are all on the same page. Wouldn't it suffice to say "See xref:data-import/csv-files.adoc[Working with CSV files] to learn more about the structure of data, how to clean it up, and optimize it."?

====
For a more hands-on option, see the available link:https://graphacademy.neo4j.com/categories/?search=import[GraphAcademy courses] on data import.
====

== Methods comparison

The following table shows all supported methods for importing data into Neo4j:
Contributor

The plot grows thicker... In Desktop2, "Import" is available, but only for CSV. It is built in, so it's not the standalone Data Importer...

Contributor Author

You mean the Open folder > Import option?

Contributor

No, Desktop2 has Importer built in, just like the Aura console, but it only supports CSV files.

lidiazuin and others added 2 commits June 4, 2025 11:14
Co-authored-by: Jessica Wright <49636617+AlexicaWright@users.noreply.github.com>
Contributor

@AlexicaWright left a comment

Some more comments ;)


Comment on lines +18 to +19
Before loading the file, you need to first create an link:https://neo4j.com/product/auradb/[Aura instance] or choose a link:{docs-home}/deployment-options[deployment of your choice].
Then, you can load the file using `LOAD CSV` using the following command:
Contributor

@AlexicaWright Jun 13, 2025

Suggested change
Before loading the file, you need to first create an link:https://neo4j.com/product/auradb/[Aura instance] or choose a link:{docs-home}/deployment-options[deployment of your choice].
Then, you can load the file using `LOAD CSV` using the following command:
The `LOAD CSV` command can be used to load data into any deployment of Neo4j, whether it is an link:https://neo4j.com/product/auradb/[Aura instance] or a local installation.
See link:{docs-home}/deployment-options[deployment options] for information.
The command looks like this:

[source,cypher]
--
LOAD CSV [WITH HEADERS] FROM url [AS alias] [FIELDTERMINATOR char]
--
Contributor

Suggested change
--
--
If you include the optional `WITH HEADERS`, the first line of the CSV file is treated as a header and each row is treated as a map of key-value pairs rather than a list of values.
`FROM` lets you specify the location, whether local or over the internet, and it cannot be omitted.
`AS alias` names each row for reference.
The default field terminator in CSV files is the comma, but others are supported and can be specified using the parsing option `FIELDTERMINATOR`.

This is just a suggestion, but since it is a tutorial about this command, I think it's worthwhile to break down the basic command and explain what each part does.
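To illustrate that breakdown, a hypothetical instance of the command with each optional part filled in (the URL and column names are placeholders):

[source,cypher]
----
// WITH HEADERS: each row becomes a map keyed by the header names
// AS row: the alias used to refer to the current row
// FIELDTERMINATOR ';': the file uses semicolons instead of the default comma
LOAD CSV WITH HEADERS FROM 'https://example.com/people.csv' AS row FIELDTERMINATOR ';'
RETURN row.name, row.company
LIMIT 10
----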

//Example 2 - file placed in subdirectory within import directory (import/northwind/customers.csv)
LOAD CSV FROM "file:///northwind/customers.csv"
----
This is the content of the example `people.csv` file:
Contributor

Maybe use the result of running that command instead?

MERGE (c:Company {companyId: row.companyId})
MERGE (e)-[r:WORKS_FOR]->(c)
----
Note that the `FIELDTERMINATOR` wasn’t specified in the `LOAD CSV` clause because the default value is a comma.
Contributor

If you explain it on first mention, you can omit this. I added a suggestion for that.


The `neo4j-admin database import` command can be used for the initial graph population only.
. Search for typos in the data and in the queries.
Contributor

This seems like something to do once you know something is inaccurate?

* Type conversion is possible by suffixing the name with indicators like `:INT`, `:BOOLEAN`, etc.

For more details on this header format and the tool, see the section in the link:https://neo4j.com/docs/operations-manual/current/tools/neo4j-admin/neo4j-admin-import/[Neo4j Operations Manual -> Neo4j Admin import^] and the accompanying link:https://neo4j.com/docs/operations-manual/current/tutorial/neo4j-admin-import/[tutorial^].
== Model your data
Contributor

This section is very confusing. I suggest deleting it and linking to the chapter on data modeling instead.

Co-authored-by: Jessica Wright <49636617+AlexicaWright@users.noreply.github.com>
@neo4j-docops-agent
Collaborator

This PR includes documentation updates
View the updated docs at https://neo4j-docs-getting-started-470.surge.sh


Contributor

@AlexicaWright left a comment

Second part reviewed! Looking great Lidia!!


=== Field terminator

Also known as delimiter, a field terminator is a character used to separate each field in a CSV file.
Contributor

Just a thought, but how would the LOAD CSV command work with more than one field terminator?

--

Here, the movie and person data (including the IDs) is repeated in different rows every time new information about a particular actor's role is featured.
This sort of duplication compromises the structure of the data when in a graph.
Contributor

Suggested change
This sort of duplication compromises the structure of the data when in a graph.
This sort of duplication compromises the graph data structure.


== File location

When working with a CSV file in Neo4j, you can access it from a link:
Contributor

Suggested change
When working with a CSV file in Neo4j, you can access it from a link:
When using the `LOAD CSV` command to load your CSV data, the CSV file is accessed via URL, either over the internet:

RETURN row
--

Or from a local folder, if you use an on-premise deployment.
Contributor

So this doesn't work in Aura? Maybe test to see?
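For reference, minimal sketches of both access methods (the URLs and file names are placeholders):

[source,cypher]
----
// Over the internet, from a publicly reachable URL
LOAD CSV WITH HEADERS FROM 'https://example.com/data/people.csv' AS row
RETURN count(row);

// From the import directory of a self-managed installation
LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
RETURN count(row);
----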

If you want to open your CSV file from another location, you need to change the link:https://neo4j.com/docs/operations-manual/2025.03/configuration/configuration-settings/#config_server.directories.import[`server.directories.import`] settings.

[IMPORTANT]
====
Contributor

This is a very long admonition. Could it be shortened or rewritten as a regular paragraph (i.e. not an admonition)?

To avoid this problem:
+
* Check if headers match the data in the file.
* Adjust formatting, columns, etc _before_ you import for a smooth process.
Contributor

Suggested change
* Adjust formatting, columns, etc _before_ you import for a smooth process.
It is recommended to adjust formatting, columns, etc _before_ you import for a smooth process.

. *Inconsistent line breaks*
+
Ensure line breaks are consistent throughout the file.
The recommendation is to use the Unix style for compatibility, in case you are using Linux.
Contributor

Suggested change
The recommendation is to use the Unix style for compatibility, in case you are using Linux.
For Linux users, the recommendation is to use the Unix style for compatibility.


* `*toInteger()*`: converts a value to an integer.
* `*toFloat()*`: converts a value to a float (e.g. for monetary amounts).
* `*datetime()*`: converts a value to a `DateTime`.
Contributor

For consistency, we should either use code formatting for all data types or none. I suggest using it for all of them, so string, float, etc.
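For reference, a minimal sketch combining the conversion functions listed above during a load (the file name and columns are hypothetical):

[source,cypher]
----
// Hypothetical orders.csv with orderId, total, and an ISO 8601 createdAt column
LOAD CSV WITH HEADERS FROM 'file:///orders.csv' AS row
MERGE (o:Order {orderId: toInteger(row.orderId)})
SET o.total = toFloat(row.total),
    o.createdAt = datetime(row.createdAt)
----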

----

In this case, you should separate node and relationship creation on a separate part of the processing.
For instance, instead of the following:
Contributor

Suggested change
For instance, instead of the following:
For example, instead of the following:
