This repository contains additional services required by the DataClouder template. For reference implementations, check out our other templates:
git clone https://github.com/dataclouder-dev/dataclouder-template-python [your-project-name]
or use the CREATE TEMPLATE button in the top-right corner on GitHub.
- Python >= 3.11
- Make >= 3.0.0 (Optional but highly recommended)
- Poetry >= 2.0.0 (Optional but recommended)
- Docker (Optional)
- Google Cloud credentials and environment variables
- MongoDB credentials
- A `.env` file is required: create it by copying `.env.example`, then set the variables.
- A Google service account file is required: place it in the `./.cred` folder at the project root. Check the documentation on how to create a service account.

You should be ready to go.
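Before starting the server, a short script like this can confirm the prerequisites above are in place (a minimal sketch; `.env` and `.cred` are the paths this README names):

```python
from pathlib import Path

def missing_prereqs(root: str = ".") -> list[str]:
    """Return the required entries from the steps above that are absent."""
    required = [".env", ".cred"]  # the .env file and the credentials folder
    return [name for name in required if not (Path(root) / name).exists()]

if __name__ == "__main__":
    missing = missing_prereqs()
    if missing:
        print("Missing before you can run the server:", missing)
    else:
        print("All set: .env and .cred are present.")
```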
Requires Poetry and Docker to be installed.
make install # Only the first time
# Single command setup (Work in Progress)
make start
# Create virtual environment
python3 -m venv .venv
# Activate virtual environment
# For Unix/MacOS:
source .venv/bin/activate
# For Windows:
.venv\Scripts\activate
# Install required packages
pip install -r requirements.txt
You'll need to obtain the following from the Polilan development team:
- Google Cloud credential file (place it in the `./.cred` folder)
- Environment variables template (`.env` file)
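For illustration only, a `.env` might look like the sketch below; the variable names here are hypothetical, and the authoritative list is in `.env.example`:

```env
# Hypothetical examples — copy .env.example for the real variable names
GOOGLE_APPLICATION_CREDENTIALS=./.cred/service-account.json
MONGO_URI=mongodb://localhost:27017
MONGO_DB_NAME=my-database
```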
# Option 1: Using uvicorn
uvicorn app.main:app --reload
# Option 2: Using FastAPI development server
fastapi dev app/main.py
# Option 3: Using make (recommended)
make start
Once running, access the API documentation at: http://127.0.0.1:8000/docs
| Environment | URL |
|---|---|
| QA | https://..... |
| Production | https://..... |
- Set environment variables: ensure the `.env` file is present in the project root.
- Build the Docker image:
make gcp-build
- Deploy to Google Cloud Run:
make gcp-deploy

Change the project name in the Makefile, then just run:
make deploy
Note: if you want to automate multiple environments, remember that the Makefile only sets a variable if it is not already defined in `.env`, so variables added to `.env` take priority. You can add a set of variables for each environment.
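The precedence described in this note can be sketched as a Makefile fragment (variable names here are hypothetical placeholders, not this template's actual Makefile):

```make
# Values already defined in .env win; ?= only assigns when a variable is unset
include .env
export

GCP_PROJECT ?= my-default-project   # overridden if .env defines GCP_PROJECT
REGION      ?= us-central1
```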
Note: Before setting up automated deployment, we recommend performing one manual deployment to verify everything works correctly. Initial deployments require setting up Cloud Run service variables, while subsequent deployments do not. Also note that manual deployments use the default GCR repository for artifacts, while automated deployments use a custom repository.
Steps:
- Fork the repository
- Go to Cloud Build and create a new trigger
- Grant GitHub access, select the repository, and accept conditions
- Configure trigger settings according to your needs
- Optional: Add permissions to the service account (Logs Writer, Cloud Run Admin, or default logs only)
- Add the repository in Artifact Registry (recommended: add policies to remove old versions)
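The trigger ultimately runs a build config; a minimal `cloudbuild.yaml` might look like the sketch below (the service, repository, and region names are placeholders, not this template's actual config):

```yaml
# Hypothetical cloudbuild.yaml sketch — adjust repo, service, and region
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-service',
           '--image', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$COMMIT_SHA',
           '--region', 'us-central1']
```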
Poetry is the recommended package manager for this project. Here are some useful commands:
poetry add <package> # Add a new package
poetry remove <package> # Remove a package
poetry update <package> # Update a package
poetry install # Install all dependencies
poetry build # Build the project
poetry publish # Publish the package
poetry show # Check dependencies
You can create a new project, but if you want to keep receiving updates from the template, you can run:
make merge-upstream
# Build the image
docker build -t dc_python_server_image .
# Run the container
docker run -it -p 8080:8080 dc_python_server_image
We highly recommend using Ruff, a fast Python linter and formatter that replaces multiple tools such as flake8. Settings are configured in the `pyproject.toml` file.
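For reference, Ruff settings in `pyproject.toml` look like the fragment below (illustrative values only, not this repo's actual configuration):

```toml
[tool.ruff]
line-length = 120

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle, pyflakes, and isort rules
```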
Install the Ruff VSCode Extension
ruff check . # Check for issues
ruff check --fix . # Fix issues automatically
ruff format . # Format code
ruff check --fix . && ruff format .  # Fix issues, then format code
For more information about Ruff rules and configuration, visit the official documentation.