The examples in the README.md and the documentation all print each row individually. In practice, many users will want to convert the results to a pandas DataFrame. Why not include that example in the getting started guide? We currently use:
```python
import pandas as pd

cursor.execute(query)
result = cursor.fetchall()
df = pd.DataFrame(result, columns=[x[0] for x in cursor.description])
```
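For a docs example, a slightly fuller, self-contained sketch of the same pattern might look like the following. The connection parameters, query, and table name are placeholders, not values from this issue:

```python
import pandas as pd
from databricks import sql

# Placeholder connection parameters -- substitute your own workspace values.
with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
        rows = cursor.fetchall()
        # cursor.description is the DB-API 2.0 sequence of 7-item tuples;
        # the first item of each tuple is the column name.
        df = pd.DataFrame(rows, columns=[col[0] for col in cursor.description])

print(df.head())
```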
Writing it in this issue since it's closely related: would it make sense to have an actual option to get the result set as a pandas DataFrame?

Perhaps I'm missing something important, but this code is confusing: https://github.com/databricks/databricks-sql-python/blob/main/src/databricks/sql/client.py#L657 Why not have an option to return the actual DataFrame? Why is it converted back and forth?
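For what it's worth, one workaround that avoids the row round-trip, assuming the cursor's Arrow-based fetch methods (e.g. `fetchall_arrow()`, which the linked client code suggests) behave as expected, could look roughly like this:

```python
from databricks import sql

# Placeholder connection parameters -- substitute your own workspace values.
with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
        # fetchall_arrow() returns a pyarrow.Table, which converts to
        # pandas directly instead of going through Python row objects.
        table = cursor.fetchall_arrow()
        df = table.to_pandas()
```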
I guess the same would be helpful for writes. I was just wondering what the best way of writing a pandas DataFrame to Databricks would be.
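In case it helps, a small-scale option through this connector is the DB-API `executemany()` with parameterized INSERTs. Treat the sketch below as an assumption-laden example: the catalog/schema/table names are hypothetical, and the `:name` parameter-marker style depends on the connector version you are running.

```python
import pandas as pd
from databricks import sql

# Example data only.
df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Placeholder connection parameters -- substitute your own workspace values.
with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        # Hypothetical target table; assumes it already exists with matching columns.
        cursor.executemany(
            "INSERT INTO my_catalog.my_schema.my_table (id, name) VALUES (:id, :name)",
            df.to_dict(orient="records"),  # one parameter dict per row
        )
```

For anything beyond small volumes, bulk-loading through a staging location (for example with COPY INTO) is usually a better fit than row-by-row INSERTs.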