Running FastAPI in Production on a VPS: Step-by-Step Guide

Deploying FastAPI applications to production on a VPS requires careful configuration. This step-by-step guide walks you through the entire process.

Prerequisites

- A VPS running Ubuntu 20.04 or later
- A domain name (optional but recommended)
- Basic knowledge of Linux commands

Step 1: Server Setup

Update the system:

```shell
sudo apt update
sudo apt upgrade -y
```

Install Python and dependencies:

```shell
sudo apt install python3.9 python3-pip python3-venv nginx supervisor -y
```

Step 2: Create the Application Directory

```shell
mkdir -p /var/www/myapp
cd /var/www/myapp
```

Create and activate a virtual environment:

```shell
python3 -m venv venv
source venv/bin/activate
```

Step 3: Deploy Your Application

Install the dependencies (the brackets are quoted so the shell does not expand them):

```shell
pip install fastapi "uvicorn[standard]" gunicorn
```

Create the application file:

```python
# main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/health")
def health_check():
    return {"status": "healthy"}
```

Step 4: Configure Gunicorn

Create gunicorn_config.py: ...
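The config itself is truncated above; as a hedged sketch, a gunicorn_config.py for a FastAPI app often looks something like this (the bind address, worker formula, and timeout are assumptions for illustration, not the article's actual values):

```python
# gunicorn_config.py -- illustrative sketch only; the article's real
# config is truncated above, so these values are assumptions.
import multiprocessing

bind = "127.0.0.1:8000"  # Nginx proxies requests to this address
workers = multiprocessing.cpu_count() * 2 + 1  # common rule of thumb
worker_class = "uvicorn.workers.UvicornWorker"  # serve the ASGI app
timeout = 30
```

You would then start the server with `gunicorn -c gunicorn_config.py main:app` and put Nginx in front as a reverse proxy.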

December 9, 2025 · 4616 views

Django: What's New in 6.0

Django 6.0 was released today, starting another release cycle for the loved and long-lived Python web framework (now 20 years old!). It comes with a mosaic of new features, with contributions from many.

Template Partials

The Django Template Language now supports template partials, making it easier to encapsulate and reuse small named fragments within a template file. Partials are sections of a template marked by the new {% partialdef %} and {% endpartialdef %} tags. They can be reused within the same template or rendered in isolation. ...
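As an illustrative sketch of the feature (the fragment content here is made up; the {% partialdef %}/{% endpartialdef %} tags come from the release notes, and {% partial %} is the companion tag for rendering a defined partial):

```html
{# Define a reusable named fragment once #}
{% partialdef post-card %}
  <article>
    <h2>{{ post.title }}</h2>
  </article>
{% endpartialdef %}

{# Render it elsewhere in the same template #}
{% partial post-card %}
```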

December 9, 2025 · 3857 views

Python Asyncio Architecture: Event Loops, Tasks, and Futures Explained

Understanding Python’s asyncio architecture is crucial for writing efficient asynchronous code. Here’s a comprehensive guide.

Event Loop

The event loop is the core of asyncio. It manages and distributes the execution of different tasks.

```python
import asyncio

async def main():
    print("Hello")
    await asyncio.sleep(1)
    print("World")

# The event loop runs the coroutine
asyncio.run(main())
```

Coroutines

Coroutines are functions defined with async def. They can be paused and resumed.

```python
async def fetch_data():
    await asyncio.sleep(1)
    return "data"

# Calling a coroutine function returns a coroutine object
coro = fetch_data()
```

Tasks

Tasks wrap coroutines and schedule them on the event loop. ...
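A runnable sketch of the Tasks idea (function and tag names here are illustrative): wrapping coroutines with asyncio.create_task schedules them on the event loop immediately, so the two awaits below overlap instead of running back to back.

```python
import asyncio

async def fetch_data(tag: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for real I/O
    return f"data-{tag}"

async def main() -> list:
    # create_task schedules each coroutine right away, so both
    # "requests" run concurrently on the same event loop.
    t1 = asyncio.create_task(fetch_data("a"))
    t2 = asyncio.create_task(fetch_data("b"))
    return [await t1, await t2]

results = asyncio.run(main())
```

With plain `await fetch_data(...)` calls the sleeps would run sequentially; tasks are what unlock the concurrency.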

July 20, 2022 · 4190 views

Building a Mini Blog with Python and Flask

Learn how to build a simple blog application using Python and Flask.

Setup

```python
from flask import Flask, render_template, request, redirect, url_for

app = Flask(__name__)

# Simple in-memory storage
posts = []
```

Routes

```python
@app.route('/')
def index():
    return render_template('index.html', posts=posts)

@app.route('/post', methods=['GET', 'POST'])
def create_post():
    if request.method == 'POST':
        title = request.form['title']
        content = request.form['content']
        posts.append({'title': title, 'content': content})
        return redirect(url_for('index'))
    return render_template('create_post.html')
```

Templates

```html
<!-- index.html -->
{% for post in posts %}
<article>
  <h2>{{ post.title }}</h2>
  <p>{{ post.content }}</p>
</article>
{% endfor %}
```

Running

```shell
flask run
```

(`flask run` auto-discovers an `app.py` in the current directory; for other filenames, use `flask --app yourmodule run`.)

Conclusion

Build your first Flask blog application! 🚀
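One way to sanity-check the routes above without opening a browser is Flask's built-in test client. This sketch inlines the template with render_template_string so it stays self-contained (the article itself uses separate template files):

```python
from flask import Flask, render_template_string, request, redirect, url_for

app = Flask(__name__)
posts = []

@app.route('/')
def index():
    # Inline template standing in for index.html
    return render_template_string(
        "{% for post in posts %}<h2>{{ post.title }}</h2>{% endfor %}",
        posts=posts,
    )

@app.route('/post', methods=['POST'])
def create_post():
    posts.append({'title': request.form['title'],
                  'content': request.form['content']})
    return redirect(url_for('index'))

# Exercise the routes without running a server
client = app.test_client()
client.post('/post', data={'title': 'Hello', 'content': 'First post'})
page = client.get('/').get_data(as_text=True)
```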

June 15, 2022 · 4276 views

Async SQLAlchemy: Best Practices for Async Python Applications

Using SQLAlchemy with async Python requires understanding async patterns. Here’s how.

Setup

```python
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
AsyncSessionLocal = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
```

Async Models

```python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
```

Async Operations

```python
async def create_user(name: str):
    async with AsyncSessionLocal() as session:
        user = User(name=name)
        session.add(user)
        await session.commit()
        return user

async def get_user(user_id: int):
    async with AsyncSessionLocal() as session:
        result = await session.get(User, user_id)
        return result
```

Best Practices

- Use async context managers
- Commit transactions explicitly
- Handle exceptions properly
- Use connection pooling
- Close sessions correctly

Conclusion

Build efficient async database applications with SQLAlchemy! 🐍

March 20, 2022 · 4081 views

Pandas Joins: Complete Guide to Merging DataFrames

Pandas provides powerful tools for joining DataFrames. Here’s a comprehensive guide.

Merge Types

Inner Join

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['A', 'B'], 'value1': [1, 2]})
df2 = pd.DataFrame({'key': ['B', 'C'], 'value2': [3, 4]})

result = pd.merge(df1, df2, on='key', how='inner')
```

Left Join

```python
result = pd.merge(df1, df2, on='key', how='left')
```

Right Join

```python
result = pd.merge(df1, df2, on='key', how='right')
```

Outer Join

```python
result = pd.merge(df1, df2, on='key', how='outer')
```

Multiple Keys

```python
# Assumes both frames share key1 and key2 columns
result = pd.merge(df1, df2, on=['key1', 'key2'])
```

Best Practices

- Choose the right join type
- Handle missing values
- Use appropriate keys
- Check for duplicates
- Optimize for large datasets

Conclusion

Master Pandas joins for efficient data manipulation! 📊
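A handy companion when debugging joins: merge's indicator parameter tags each row with its origin, which makes unmatched keys easy to spot (same toy frames as above):

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['A', 'B'], 'value1': [1, 2]})
df2 = pd.DataFrame({'key': ['B', 'C'], 'value2': [3, 4]})

# indicator=True adds a _merge column with values
# 'left_only', 'right_only', or 'both'
result = pd.merge(df1, df2, on='key', how='outer', indicator=True)
```

Filtering on `result['_merge'] == 'left_only'` then shows exactly which left-hand keys found no match.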

January 20, 2022 · 4919 views

Migrating from Python 2 to Python 3: Complete Guide

Migrating from Python 2 to Python 3 requires careful planning. Here’s a step-by-step guide.

Key Differences

Print Statement

```python
# Python 2
print "Hello"

# Python 3
print("Hello")
```

Division

```python
# Python 2
5 / 2   # 2

# Python 3
5 / 2   # 2.5
5 // 2  # 2
```

Unicode

```python
# Python 2
s = u"Hello"

# Python 3
s = "Hello"  # Unicode by default
```

Migration Tools

```shell
# 2to3 tool
2to3 -w script.py

# Modernize
python-modernize script.py
```

Best Practices

- Test thoroughly
- Update dependencies
- Use type hints
- Handle bytes/strings
- Update string formatting

Conclusion

Migrate to Python 3 for modern Python development! 🐍
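The bytes/strings item from the best-practices list is the one that bites most often in real migrations; a small sketch of the Python 3 rules:

```python
# Python 3 separates text (str) from binary data (bytes);
# conversions must be explicit via encode/decode, unlike Python 2's
# implicit coercion.
text = "héllo"
data = text.encode("utf-8")           # str -> bytes
round_tripped = data.decode("utf-8")  # bytes -> str

# Division also changed: / is true division, // is floor division
half = 5 / 2
floor = 5 // 2
```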

December 15, 2021 · 3410 views

Apache Spark Optimization: Partitioning and Bucketing Guide

Optimizing Spark jobs is crucial for performance. Here’s how to use partitioning and bucketing effectively.

Partitioning

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Optimization").getOrCreate()

# Repartition (full shuffle) into 10 partitions by column
df = df.repartition(10, "column_name")

# Coalesce (no shuffle) down to 5 partitions
df = df.coalesce(5)
```

Bucketing

```python
df.write \
    .bucketBy(10, "bucket_column") \
    .sortBy("sort_column") \
    .saveAsTable("bucketed_table")
```

Broadcast Joins

```python
from pyspark.sql.functions import broadcast

result = large_df.join(broadcast(small_df), "key")
```

Caching

```python
df.cache()    # shorthand for persist() with the default storage level
df.persist()  # lets you choose a storage level explicitly
```

Best Practices

- Partition appropriately
- Use bucketing for joins
- Broadcast small tables
- Cache frequently used data
- Monitor performance

Conclusion

Optimize Spark jobs for better performance! ⚡
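To build intuition for what repartitioning by a column does without needing a cluster, here is a plain-Python sketch of hash partitioning (crc32 stands in for Spark's Murmur3 hash, so the exact assignments differ from Spark's, but the invariant is the same):

```python
import zlib
from collections import defaultdict

def assign_partition(key: str, num_partitions: int) -> int:
    # Hash the partition column value and take it modulo the
    # partition count -- the same scheme Spark uses conceptually.
    return zlib.crc32(key.encode()) % num_partitions

rows = [{"user": u, "amount": a} for u, a in
        [("alice", 10), ("bob", 5), ("alice", 7), ("carol", 3)]]

partitions = defaultdict(list)
for row in rows:
    partitions[assign_partition(row["user"], 4)].append(row)

# All rows sharing a key land in the same partition -- this co-location
# is what lets bucketed joins skip the shuffle.
```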

October 15, 2021 · 4106 views