
SQLAlchemy raises when writing timedelta64 columns to sqlite #6921

Closed
@danielballan

Description

Maybe we can work around this. At least it should raise informatively.
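
For reference, engine in the session below is assumed to be a SQLAlchemy engine pointing at an in-memory sqlite database, created along these lines (not shown in the original session):

from sqlalchemy import create_engine

engine = create_engine('sqlite://')  # in-memory sqlite database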

In [24]: df = pd.to_timedelta(Series(['00:00:01', '00:00:03'], name='foo')).to_frame()

In [25]: df
Out[25]: 
       foo
0 00:00:01 
1 00:00:03

[2 rows x 1 columns]

In [26]: df.dtypes
Out[26]: 
foo    timedelta64[ns]
dtype: object

In [27]: df.to_sql('bar', engine)
(...)
StatementError: unsupported operand type(s) for +: 'datetime.datetime' and 'long' (original cause: TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'long') 'INSERT INTO bar (foo) VALUES (?)' [{'foo': 1000000000L}, {'foo': 3000000000L}]

The full traceback is in this gist.
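
One possible workaround, assuming you are willing to store plain integer nanosecond counts and keep track of the unit yourself: cast the column to int64 before writing. This is only a sketch, not something to_sql does for you:

df_int = df.copy()
df_int['foo'] = df_int['foo'].astype('int64')  # timedelta64[ns] -> integer nanoseconds
df_int.to_sql('bar', engine)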

Using a sqlite3.Connection (legacy-style) allows you to write and read.

In [32]: conn = sqlite3.connect(':memory:')

In [33]: df.to_sql('bar', conn)

In [37]: pd.read_sql('SELECT * FROM bar', conn, flavor='sqlite')
Out[37]: 
   index         foo
0      0  1000000000
1      1  3000000000

The data comes back as int64. There is no clean way around this: nothing in the sqlite3 database indicates that these integers are actually timedeltas. We could store timedeltas as strings, the way we do for datetimes, but I'm not necessarily in favor of that approach.
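
If we go the integer route, the conversion back has to be done by hand after reading, assuming the caller knows the stored integers are nanoseconds (again, just a sketch):

out = pd.read_sql('SELECT * FROM bar', conn)
out['foo'] = pd.to_timedelta(out['foo'], unit='ns')  # int64 nanoseconds -> timedelta64[ns]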

    Labels

    IO SQL (to_sql, read_sql, read_sql_query), Timedelta (Timedelta data type)