BUG: read_xml does not support large files #45442
Comments
Thanks @wangrenz! This is a great use case. Large XML file support was in the works for both parsers. How large was your XML file? Can you post a reproducible example of its content (redact as needed)?
Using the Wikipedia latest pages-articles dump, a bzip2 at 2.7 GB which decompresses to 12.4 GB, I am unable to raise your error. However, an iterparse solution is in the works. The idea is that users pass in a dict of the repeating node and the descendant elements or attributes to parse as columns:

import pandas as pd
import xml.etree.ElementTree as ET
# import lxml.etree as ET

xml_file = "~/Downloads/enwikisource-latest-pages-articles.xml"
iterparse_items = {"page": ["title", "ns", "id"]}

data = []
node = next(iter(iterparse_items))

for event, elem in ET.iterparse(xml_file, events=('start', 'end')):
    # strip any namespace prefix from the tag
    curr_elem = (
        elem.tag.split('}')[1] if '}' in elem.tag else elem.tag
    )
    if event == 'start':
        # a new repeating node begins: start a fresh row
        if curr_elem == node:
            row = {}
    # capture matching descendant elements or attributes
    for col in iterparse_items[node]:
        if curr_elem == col:
            row[col] = (
                elem.text.strip()
                if elem.text is not None
                else elem.text
            )
        if col in elem.attrib:
            row[col] = elem.attrib[col].strip()
    if event == 'end':
        # the repeating node is closed: the row is complete
        if curr_elem == node:
            data.append(row)
    # free the parsed element to keep memory usage flat
    elem.clear()

df = pd.DataFrame(data)
print(df)
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Thank you @ParfaitG. My XML file is as follows: https://raw.githubusercontent.com/wangrenz/temp_repo/master/202201150000.xml.bz2 The XML file is about 60 MB.
What are you attempting to extract from this XML? What is its structure? Without passing a specific xpath, read_xml only parses the elements directly under the document root. For pandas DataFrames, flat, shallow XML with repeated nodes is expected, to fit the two-dimensional structure of rows by columns. See the Notes section of pandas.read_xml. If needed, you can use XSLT via lxml to flatten the document; see the sketch below.
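If helpful, a minimal sketch of that approach using read_xml's stylesheet argument (parser="lxml"); the file name, element names, and XSLT here are illustrative, not taken from your file:

import pandas as pd

# illustrative XSLT that pulls nested fields up into flat <row> elements;
# the match/select paths must be adapted to the real document structure
xsl = """\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <data>
      <xsl:for-each select="//record">
        <row>
          <id><xsl:value-of select="id"/></id>
          <name><xsl:value-of select="details/name"/></name>
        </row>
      </xsl:for-each>
    </data>
  </xsl:template>
</xsl:stylesheet>
"""

df = pd.read_xml("nested.xml", stylesheet=xsl, parser="lxml")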
Closed by #45724
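For reference, assuming this was resolved by the iterparse support sketched above (available in pandas 1.5 or later), a minimal sketch of the resulting read_xml usage; the path is illustrative and must point to a decompressed file on local disk:

import pandas as pd

# stream the repeating "page" node and its "title", "ns" and "id" descendants
# without loading the whole document tree into memory
df = pd.read_xml(
    "enwikisource-latest-pages-articles.xml",
    iterparse={"page": ["title", "ns", "id"]},
)
print(df.head())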
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
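A minimal sketch of the kind of call that raises the error shown below, assuming the linked file has been decompressed to local disk (the path is illustrative):

import pandas as pd

# parsing a very large or deeply nested document with the default lxml parser
# can hit libxml2's built-in limits and raise "Huge input lookup"
df = pd.read_xml("202201150000.xml", parser="lxml")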
Issue Description
lxml.etree.XMLSyntaxError: internal error: Huge input lookup, line 61504, column 702
Expected Behavior
read_xml needs to add huge_tree=True in pandas/io/xml.py, for example by passing huge_tree=True to the lxml parser in both parse_data and _parse_doc.
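For context, huge_tree is an existing option of lxml's XMLParser that lifts libxml2's hard-coded input limits; a minimal standalone lxml sketch of the behavior this report asks read_xml to expose (file name illustrative, not pandas internals):

from lxml import etree

# huge_tree=True disables libxml2's security limits on node depth and text size,
# which is what the "Huge input lookup" error is tripping over
parser = etree.XMLParser(huge_tree=True)
doc = etree.parse("202201150000.xml", parser)
print(doc.getroot().tag)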
Installed Versions
INSTALLED VERSIONS
commit : f00ed8f
python : 3.8.10.final.0
python-bits : 64
OS : Linux
OS-release : 3.10.0-1127.13.1.el7.x86_64
Version : #1 SMP Tue Jun 23 15:46:38 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.0
numpy : 1.20.3
pytz : 2021.1
dateutil : 2.8.2
pip : 21.1.3
setuptools : 52.0.0.post20210125
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.6.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.1
IPython : 7.22.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.2
fsspec : 2021.07.0
fastparquet : None
gcsfs : None
matplotlib : 3.3.4
numexpr : 2.7.3
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.7.0
sqlalchemy : None
tables : None
tabulate : None
xarray : 0.19.0
xlrd : None
xlwt : None
numba : None