diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..bd4886df2
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,48 @@
+# Contributing
+
+To contribute to this repository, fork it and send pull requests.
+
+## Set up your environment
+
+This project uses [Poetry](https://python-poetry.org/) for dependency management, tests, and linting.
+
+1. Clone this repository
+2. Run `poetry install`
+
+### Unit Tests
+
+We use [Pytest](https://docs.pytest.org/en/7.1.x/) as our test runner. Invoke it with `poetry run pytest`; all other arguments are passed directly to `pytest`.
+
+#### All tests
+```bash
+poetry run pytest tests
+```
+
+#### Only a specific test file
+
+```bash
+poetry run pytest tests/tests.py
+```
+
+#### Only a specific method
+
+```bash
+poetry run pytest tests/tests.py::ClientTestSuite::test_closing_connection_closes_commands
+```
+
+### Code formatting
+
+This project uses [Black](https://pypi.org/project/black/).
+
+```
+poetry run black src
+```
+## Pull Request Process
+
+1. Update the [CHANGELOG.md](CHANGELOG.md) or similar documentation with details of the changes you wish to make, if applicable.
+2. Add any appropriate tests.
+3. Make your code or other changes.
+4. Review guidelines such as
+   [How to write the perfect pull request][github-perfect-pr], thanks!
+
+[github-perfect-pr]: https://blog.github.com/2015-01-21-how-to-write-the-perfect-pull-request/
diff --git a/README.md b/README.md
index f07233866..45251507f 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,19 @@
 # Databricks SQL Connector for Python
 
+[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
+[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
+
 The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses.
 It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/).
 
 This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
 
 You are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](help.databricks.com).
 
-# Documentation
+## Requirements
+
+Python 3.7 or above is required.
+
+## Documentation
 
 For the latest documentation, see
@@ -23,9 +30,10 @@ Example usage:
 from databricks import sql
 
 connection = sql.connect(
-  server_hostname='<server-hostname>',
-  http_path='<http-path>',
-  access_token='<personal-access-token>')
+  server_hostname='********.databricks.com',
+  http_path='/sql/1.0/endpoints/****************',
+  access_token='dapi********************************')
+
 
 cursor = connection.cursor()
 
@@ -38,8 +46,17 @@ cursor.close()
 connection.close()
 ```
 
-Where:
-- `<server-hostname>` is the Databricks instance host name.
-- `<http-path>` is the HTTP Path either to a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef),
-  or to a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123)
-- `<personal-access-token>` is an HTTP Bearer access token, e.g. a Databricks Personal Access Token.
+In the above example:
+- `server_hostname` is the Databricks instance host name.
+- `http_path` is the HTTP Path either to a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef)
+  or to a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123).
+- `access_token` is a Databricks Personal Access Token for the account that will execute commands and queries.
+
+
+## Contributing
+
+See [CONTRIBUTING.md](CONTRIBUTING.md)
+
+## License
+
+[Apache License 2.0](LICENSE)