@@ -9,21 +9,32 @@ Suppose you want to load all data from an existing BigQuery table

.. code-block:: python

-    # Insert your BigQuery Project ID Here
-    # Can be found in the Google web console
+    import pandas_gbq
+
+    # TODO: Set your BigQuery Project ID.
     projectid = "xxxxxxxx"

-    data_frame = read_gbq('SELECT * FROM test_dataset.test_table', projectid)
+    data_frame = pandas_gbq.read_gbq(
+        'SELECT * FROM `test_dataset.test_table`',
+        project_id=projectid)
+
+.. note::

+    A project ID is sometimes optional if it can be inferred during
+    authentication, but it is required when authenticating with user
+    credentials. You can find your project ID in the `Google Cloud console
+    <https://console.cloud.google.com>`__.

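
As a minimal sketch of the inference mentioned in the note above (it assumes
your default credentials already carry a project, for example a service
account), the ``project_id`` argument can simply be omitted:

.. code-block:: python

    import pandas_gbq

    # No project_id passed: pandas-gbq falls back to the project associated
    # with the credentials discovered during authentication.
    data_frame = pandas_gbq.read_gbq('SELECT 1 AS example_column')
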
You can define which column from BigQuery to use as an index in the
destination DataFrame as well as a preferred column order as follows:

.. code-block:: python

-    data_frame = read_gbq('SELECT * FROM test_dataset.test_table',
-                          index_col='index_column_name',
-                          col_order=['col1', 'col2', 'col3'], projectid)
+    data_frame = pandas_gbq.read_gbq(
+        'SELECT * FROM `test_dataset.test_table`',
+        project_id=projectid,
+        index_col='index_column_name',
+        col_order=['col1', 'col2', 'col3'])

You can specify the query config as a parameter to use additional options of
@@ -37,20 +48,39 @@ your job. For more information about query configuration parameters see `here
         "useQueryCache": False
       }
     }
-    data_frame = read_gbq('SELECT * FROM test_dataset.test_table',
-                          configuration=configuration, projectid)
+    data_frame = pandas_gbq.read_gbq(
+        'SELECT * FROM `test_dataset.test_table`',
+        project_id=projectid,
+        configuration=configuration)

-.. note::
+The ``dialect`` argument can be used to indicate whether to use
+BigQuery's ``'legacy'`` SQL or BigQuery's ``'standard'`` SQL (beta). The
+default value is ``'standard'``. For more information on BigQuery's
+standard SQL, see `BigQuery SQL Reference
+<https://cloud.google.com/bigquery/docs/reference/standard-sql/>`__

-    You can find your project id in the `Google developers console
-    <https://console.developers.google.com>`__.
+.. code-block:: python

+    data_frame = pandas_gbq.read_gbq(
+        'SELECT * FROM [test_dataset.test_table]',
+        project_id=projectid,
+        dialect='legacy')

-.. note::

-    The ``dialect`` argument can be used to indicate whether to use BigQuery's ``'legacy'`` SQL
-    or BigQuery's ``'standard'`` SQL (beta). The default value is ``'legacy'``, though this will change
-    in a subsequent release to ``'standard'``. For more information
-    on BigQuery's standard SQL, see `BigQuery SQL Reference
-    <https://cloud.google.com/bigquery/sql-reference/>`__
+.. _reading-dtypes:
+
+Inferring the DataFrame's dtypes
+--------------------------------
+
+The :func:`~pandas_gbq.read_gbq` method infers the pandas dtype for each
+column, based on the BigQuery table schema.
+
+================== =====================================
+BigQuery Data Type dtype
+================== =====================================
+FLOAT              float
+TIMESTAMP          DatetimeTZDtype(unit='ns', tz='UTC')
+DATETIME           datetime64[ns]
+TIME               datetime64[ns]
+DATE               datetime64[ns]
+================== =====================================
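
As a quick, hypothetical check of this mapping (the query and the
``projectid`` placeholder are illustrative only), the inferred dtypes can be
inspected directly on the returned DataFrame:

.. code-block:: python

    import pandas_gbq

    # One-row query exercising a FLOAT and a TIMESTAMP column.
    data_frame = pandas_gbq.read_gbq(
        'SELECT 1.5 AS float_col, CURRENT_TIMESTAMP() AS ts_col',
        project_id=projectid)

    # Per the table above, this should print approximately:
    #   float_col                float64
    #   ts_col       datetime64[ns, UTC]
    print(data_frame.dtypes)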