Commit ade32a2

DOC: Prepare for 0.5.0 release (#188)
* Update changelog with all changes since 0.4.1.
* Address warnings when building the docs due to mis-formed links.
* Removes some old references to streaming, which are no longer relevant.
1 parent e7007a8 commit ade32a2

4 files changed (+41 −37 lines)

docs/source/changelog.rst (+12 −4)
@@ -1,23 +1,31 @@
 Changelog
 =========
 
-0.5.0 / TBD
------------
+0.5.0 / 2018-06-15
+------------------
 
 - Project ID parameter is optional in ``read_gbq`` and ``to_gbq`` when it can
   be inferred from the environment. Note: you must still pass in a project ID
   when using user-based authentication. (:issue:`103`)
-- Add location parameter to ``read_gbq`` and ``to_gbq`` so that pandas-gbq
-  can work with datasets in the Tokyo region. (:issue:`177`)
 - Progress bar added for ``to_gbq``, through an optional library `tqdm` as a
   dependency. (:issue:`162`)
+- Add location parameter to ``read_gbq`` and ``to_gbq`` so that pandas-gbq
+  can work with datasets in the Tokyo region. (:issue:`177`)
+
+Documentation
+~~~~~~~~~~~~~
 
+- Add :doc:`authentication how-to guide <howto/authentication>`. (:issue:`183`)
+- Update :doc:`contributing` guide with new paths to tests. (:issue:`154`,
+  :issue:`164`)
 
 Internal changes
 ~~~~~~~~~~~~~~~~
 
 - Tests now use `nox` to run in multiple Python environments. (:issue:`52`)
 - Renamed internal modules. (:issue:`154`)
+- Refactored auth to an internal auth module. (:issue:`176`)
+- Add unit tests for ``get_credentials()``. (:issue:`184`)
 
 0.4.1 / 2018-04-05
 ------------------
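To ground the headline 0.5.0 change, a minimal sketch of calling ``read_gbq`` with no explicit project ID follows; it assumes default application credentials are available in the environment (for example via ``GOOGLE_APPLICATION_CREDENTIALS``), and the query is illustrative only:

    import pandas_gbq

    # As of 0.5.0, project_id may be omitted when it can be inferred
    # from the environment (default application credentials); with
    # user-based authentication a project ID is still required.
    df = pandas_gbq.read_gbq("SELECT 1 AS x")
    print(df)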

docs/source/reading.rst (+8 −6)
@@ -3,8 +3,9 @@
 Reading Tables
 ==============
 
-Suppose you want to load all data from an existing BigQuery table : `test_dataset.test_table`
-into a DataFrame using the :func:`~read_gbq` function.
+Suppose you want to load all data from an existing BigQuery table
+``test_dataset.test_table`` into a DataFrame using the
+:func:`~pandas_gbq.read_gbq` function.
 
 .. code-block:: python
 
@@ -25,9 +26,9 @@ destination DataFrame as well as a preferred column order as follows:
     col_order=['col1', 'col2', 'col3'], projectid)
 
 
-You can specify the query config as parameter to use additional options of your job.
-For more information about query configuration parameters see
-`here <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__.
+You can specify the query config as a parameter to use additional options of
+your job. For more information about query configuration parameters see `here
+<https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__.
 
 .. code-block:: python
 
@@ -42,7 +43,8 @@ For more information about query configuration parameters see
 
 .. note::
 
-    You can find your project id in the `Google developers console <https://console.developers.google.com>`__.
+    You can find your project id in the `Google developers console
+    <https://console.developers.google.com>`__.
 
 
 .. note::
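To make the re-wrapped paragraph on query configs concrete, here is a hedged example of the ``configuration`` parameter; the project ID is a placeholder, and ``useQueryCache`` is one field of the REST ``configuration.query`` resource linked above:

    import pandas_gbq

    # The configuration dict mirrors the jobs#configuration.query REST
    # resource; this example disables BigQuery's query cache.
    df = pandas_gbq.read_gbq(
        "SELECT * FROM test_dataset.test_table",
        project_id="my-project",  # placeholder project ID
        configuration={"query": {"useQueryCache": False}},
    )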

docs/source/writing.rst (+13 −21)
@@ -3,7 +3,8 @@
 Writing DataFrames
 ==================
 
-Assume we want to write a DataFrame ``df`` into a BigQuery table using :func:`~to_gbq`.
+Assume we want to write a DataFrame ``df`` into a BigQuery table using
+:func:`~pandas_gbq.to_gbq`.
 
 .. ipython:: python
 
@@ -38,21 +39,10 @@ a ``TableCreationError`` if the destination table already exists.
 
 .. note::
 
-    If the ``if_exists`` argument is set to ``'append'``, the destination dataframe will
-    be written to the table using the defined table schema and column types. The
-    dataframe must contain fields (matching name and type) currently in the destination table.
-    If the ``if_exists`` argument is set to ``'replace'``, and the existing table has a
-    different schema, a delay of 2 minutes will be forced to ensure that the new schema
-    has propagated in the Google environment. See
-    `Google BigQuery issue 191 <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__.
-
-    Writing large DataFrames can result in errors due to size limitations being exceeded.
-    This can be avoided by setting the ``chunksize`` argument when calling :func:`~to_gbq`.
-    For example, the following writes ``df`` to a BigQuery table in batches of 10000 rows at a time:
-
-    .. code-block:: python
-
-       to_gbq(df, 'my_dataset.my_table', projectid, chunksize=10000)
+    If the ``if_exists`` argument is set to ``'append'``, the destination
+    dataframe will be written to the table using the defined table schema and
+    column types. The dataframe must contain fields (matching name and type)
+    currently in the destination table.
 
 .. note::

@@ -66,8 +56,10 @@ For example, the following writes ``df`` to a BigQuery table in batches of 10000
 
 .. note::
 
-    While BigQuery uses SQL-like syntax, it has some important differences from traditional
-    databases both in functionality, API limitations (size and quantity of queries or uploads),
-    and how Google charges for use of the service. You should refer to `Google BigQuery documentation <https://cloud.google.com/bigquery/what-is-bigquery>`__
-    often as the service seems to be changing and evolving. BiqQuery is best for analyzing large
-    sets of data quickly, but it is not a direct replacement for a transactional database.
+    While BigQuery uses SQL-like syntax, it has some important differences
+    from traditional databases both in functionality, API limitations (size
+    and quantity of queries or uploads), and how Google charges for use of the
+    service. You should refer to `Google BigQuery documentation
+    <https://cloud.google.com/bigquery/docs>`__ often as the service is always
+    evolving. BigQuery is best for analyzing large sets of data quickly, but
+    it is not a direct replacement for a transactional database.
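As a companion to the trimmed ``if_exists`` note, a minimal append sketch follows; the dataset, table, and project names are placeholders, and the frame's columns are assumed to match the destination table's schema by name and type:

    import pandas as pd
    import pandas_gbq

    df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})

    # 'append' writes using the existing table schema; the default
    # 'fail' raises TableCreationError if the table already exists.
    pandas_gbq.to_gbq(
        df,
        "my_dataset.my_table",    # placeholder destination table
        project_id="my-project",  # placeholder project ID
        if_exists="append",
    )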

pandas_gbq/gbq.py (+8 −6)
@@ -524,9 +524,10 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
         <https://cloud.google.com/bigquery/sql-reference/>`__
     location : str (optional)
         Location where the query job should run. See the `BigQuery locations
-        <https://cloud.google.com/bigquery/docs/dataset-locations>
-        documentation`__ for a list of available locations. The location must
-        match that of any datasets used in the query.
+        documentation
+        <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a
+        list of available locations. The location must match that of any
+        datasets used in the query.
         .. versionadded:: 0.5.0
     configuration : dict (optional)
         Query config parameters for job processing.
@@ -659,9 +660,10 @@ def to_gbq(dataframe, destination_table, project_id=None, chunksize=None,
         .. versionadded:: 0.3.1
     location : str (optional)
         Location where the load job should run. See the `BigQuery locations
-        <https://cloud.google.com/bigquery/docs/dataset-locations>
-        documentation`__ for a list of available locations. The location must
-        match that of the target dataset.
+        documentation
+        <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a
+        list of available locations. The location must match that of the
+        target dataset.
         .. versionadded:: 0.5.0
     progress_bar : boolean, True by default. It uses the library `tqdm` to show
         the progress bar for the upload, chunk by chunk.
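To illustrate the ``location`` parameter documented in both docstrings, here is a hedged sketch of querying a dataset in the Tokyo region; the table and project names are placeholders, and ``asia-northeast1`` is BigQuery's location ID for Tokyo:

    import pandas_gbq

    # location must match the region of any datasets used in the query;
    # 'asia-northeast1' is the Tokyo region called out in the changelog.
    df = pandas_gbq.read_gbq(
        "SELECT * FROM tokyo_dataset.tokyo_table",  # placeholder table
        project_id="my-project",                    # placeholder project ID
        location="asia-northeast1",
    )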
