GoSQL is a high-performance database connector library written in Go and designed for Python applications. It provides unified database connectivity for MySQL, PostgreSQL, and Microsoft SQL Server with 2-3x better performance than native Python connectors while maintaining 100% API compatibility.
- High Performance: 2-3x faster than native Python connectors
- Drop-in Replacement: 100% API compatibility with popular Python database connectors
- Multi-Database Support: MySQL, PostgreSQL, and Microsoft SQL Server
- Optimized Connection Pooling: Efficient connection management and reuse
- Type Safety: Robust type conversion between Go and Python
- Built-in Monitoring: Performance metrics and monitoring capabilities
- Batch Operations: Optimized bulk operations for large datasets
- Memory Efficient: 3x lower memory footprint per connection
Our comprehensive benchmarks show significant performance improvements across all operations:
| Operation | GoSQL | mysql-connector-python | psycopg2 | pyodbc | Performance Gain |
|---|---|---|---|---|---|
| Connection Setup | 1.2ms | 3.8ms | 3.5ms | 4.1ms | 3.2x faster |
| Simple Query | 0.8ms | 2.5ms | 2.3ms | 2.7ms | 3.1x faster |
| Parameterized Query | 1.1ms | 3.2ms | 2.9ms | 3.4ms | 2.9x faster |
| Large Result Fetch (100K rows) | 420ms | 950ms | 870ms | 1020ms | 2.2x faster |
| Batch Insert (1K records) | 45ms | 125ms | 98ms | 156ms | 2.8x faster |
| Transaction Commit | 1.5ms | 4.0ms | 3.7ms | 4.3ms | 2.7x faster |
| Memory per Connection | 12KB | 35KB | 32KB | 38KB | 3x lower |
Benchmarks were performed on an AWS c5.4xlarge instance against dedicated RDS instances.
- Python 3.7+
- Operating System: Linux, macOS, or Windows
```bash
pip install gosql-connector
```
```bash
git clone https://github.com/coffeecms/gosql.git
cd gosql/pythonpackaging
pip install -e .
```
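To verify the installation, import the package from Python. This is a quick sanity check; a `__version__` attribute is assumed here and may not exist in every release, hence the `getattr` fallback:

```python
import gosql

# Prints the version if the package exposes one, otherwise a placeholder
print(getattr(gosql, "__version__", "installed (no version attribute)"))
```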
Before (mysql-connector-python):
```python
import mysql.connector

# Original mysql-connector-python code
conn = mysql.connector.connect(
    host="localhost",
    user="root",
    password="secret",
    database="ecommerce"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM products WHERE price > %s", (100,))
products = cursor.fetchall()

for product in products:
    print(f"Product: {product[1]}, Price: ${product[3]}")

cursor.close()
conn.close()
```
After (GoSQL - just change the import!):
```python
from gosql.mysql import connector

# Same exact code, just different import - 3x faster performance!
conn = connector.connect(
    host="localhost",
    user="root",
    password="secret",
    database="ecommerce"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM products WHERE price > %s", (100,))
products = cursor.fetchall()

for product in products:
    print(f"Product: {product[1]}, Price: ${product[3]}")

cursor.close()
conn.close()
```
```python
from gosql.postgres import connect

# PostgreSQL connection with automatic resource management
with connect(
    host="localhost",
    user="postgres",
    password="secret",
    database="analytics"
) as conn:
    with conn.cursor() as cursor:
        # Complex analytical query
        cursor.execute("""
            SELECT
                date_trunc('month', created_at) as month,
                COUNT(*) as orders,
                SUM(total_amount) as revenue
            FROM orders
            WHERE created_at >= %s
            GROUP BY month
            ORDER BY month DESC
        """, ('2024-01-01',))

        results = cursor.fetchall()
        print("Monthly Revenue Report:")
        for month, orders, revenue in results:
            print(f"{month.strftime('%Y-%m')}: {orders:,} orders, ${revenue:,.2f}")
```
```python
from gosql.mysql import connect
import time

# Demonstrate high-performance batch insert
def bulk_insert_users(user_data):
    with connect(
        host="localhost",
        user="root",
        password="secret",
        database="userdb"
    ) as conn:
        cursor = conn.cursor()

        # Batch insert - much faster than individual inserts
        start_time = time.time()
        cursor.executemany(
            "INSERT INTO users (name, email, age, city) VALUES (%s, %s, %s, %s)",
            user_data
        )
        conn.commit()
        end_time = time.time()

        print(f"Inserted {len(user_data)} records in {end_time - start_time:.2f}s")
        print(f"Throughput: {len(user_data) / (end_time - start_time):.0f} records/sec")

# Sample data
users = [
    ("Alice Johnson", "alice@email.com", 28, "New York"),
    ("Bob Smith", "bob@email.com", 34, "Los Angeles"),
    ("Carol Davis", "carol@email.com", 25, "Chicago"),
    # ... thousands more records
] * 1000  # 3000 records total

bulk_insert_users(users)
```
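For datasets too large to hold comfortably in a single `executemany()` call, a small chunking wrapper bounds client-side memory and keeps each round trip a reasonable size. This is an illustrative sketch; `chunk_size` is a tuning knob for your workload, not a GoSQL parameter:

```python
def bulk_insert_chunked(cursor, rows, chunk_size=5000):
    """Insert rows in fixed-size chunks to bound memory and batch size."""
    for i in range(0, len(rows), chunk_size):
        # Each slice is one batch; commit per chunk if you prefer
        # smaller transactions over one large one
        cursor.executemany(
            "INSERT INTO users (name, email, age, city) VALUES (%s, %s, %s, %s)",
            rows[i:i + chunk_size]
        )
```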
```python
from gosql.mssql import connect

def transfer_funds(from_account, to_account, amount):
    """Demonstrate an ACID transaction with automatic rollback on error."""
    with connect(
        server="localhost",
        user="sa",
        password="Password123!",
        database="banking"
    ) as conn:
        try:
            with conn.begin() as transaction:
                cursor = transaction.cursor()

                # Check source account balance
                cursor.execute(
                    "SELECT balance FROM accounts WHERE account_id = ?",
                    (from_account,)
                )
                balance = cursor.fetchone()[0]

                if balance < amount:
                    raise ValueError("Insufficient funds")

                # Debit source account
                cursor.execute(
                    "UPDATE accounts SET balance = balance - ? WHERE account_id = ?",
                    (amount, from_account)
                )

                # Credit destination account
                cursor.execute(
                    "UPDATE accounts SET balance = balance + ? WHERE account_id = ?",
                    (amount, to_account)
                )

                # Log the transfer
                cursor.execute(
                    "INSERT INTO transactions (from_account, to_account, amount, timestamp) VALUES (?, ?, ?, GETDATE())",
                    (from_account, to_account, amount)
                )

                # Transaction commits automatically when this block exits
            print(f"Successfully transferred ${amount} from {from_account} to {to_account}")
        except Exception as e:
            # Transaction rolls back automatically
            print(f"Transfer failed: {e}")
            raise

# Usage
transfer_funds("ACC001", "ACC002", 500.00)
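In concurrent workloads, transfers like this can fail transiently (deadlocks, lock timeouts). A small retry wrapper around the function above is a common pattern. This is a hedged sketch, not part of the GoSQL API; the exception filter is deliberately broad and should be narrowed to your driver's transient error types:

```python
import time

def with_retry(operation, attempts=3, base_delay=0.5):
    """Retry a callable with linear backoff; re-raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            # Narrow this to transient errors (e.g., deadlocks) in real code;
            # retrying logic errors like "Insufficient funds" is pointless
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)

# Usage
with_retry(lambda: transfer_funds("ACC001", "ACC002", 500.00))
```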
```python
from gosql.postgres import connect
from gosql.core import performance_monitor
import time

def analyze_user_behavior():
    """Demonstrate advanced PostgreSQL features and performance monitoring."""
    # Reset performance monitor
    performance_monitor.reset()

    with connect(
        host="localhost",
        user="postgres",
        password="secret",
        database="analytics",
        sslmode="prefer"
    ) as conn:
        cursor = conn.cursor()

        # Complex query with window functions; the CTE is named
        # recent_sessions so it does not shadow the user_sessions table
        start_time = time.time()
        cursor.execute("""
            WITH recent_sessions AS (
                SELECT
                    user_id,
                    session_start,
                    session_end,
                    EXTRACT(EPOCH FROM (session_end - session_start)) as duration_seconds,
                    ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY session_start) as session_rank
                FROM user_sessions
                WHERE session_start >= %s
            ),
            user_stats AS (
                SELECT
                    user_id,
                    COUNT(*) as total_sessions,
                    AVG(duration_seconds) as avg_session_duration,
                    MAX(duration_seconds) as max_session_duration,
                    SUM(duration_seconds) as total_time_spent
                FROM recent_sessions
                GROUP BY user_id
            )
            SELECT
                u.username,
                us.total_sessions,
                ROUND(us.avg_session_duration::numeric, 2) as avg_duration,
                ROUND(us.total_time_spent::numeric / 3600, 2) as total_hours
            FROM user_stats us
            JOIN users u ON u.id = us.user_id
            WHERE us.total_sessions >= 5
            ORDER BY us.total_time_spent DESC
            LIMIT 50
        """, ('2024-01-01',))

        results = cursor.fetchall()
        query_time = time.time() - start_time
        print(f"Query completed in {query_time*1000:.1f}ms")

        print("Top Active Users Analysis:")
        print("-" * 60)
        for username, sessions, avg_duration, total_hours in results:
            print(f"{username:20} | {sessions:3d} sessions | {avg_duration:6.1f}s avg | {total_hours:6.1f}h total")

        # Demonstrate bulk data loading into a temporary table
        print("\nBulk loading data into a temporary table...")
        start_time = time.time()

        # Create temporary table
        cursor.execute("""
            CREATE TEMP TABLE temp_events (
                user_id INTEGER,
                event_type VARCHAR(50),
                timestamp TIMESTAMP,
                metadata JSONB
            )
        """)

        # Simulate a bulk load; for PostgreSQL's COPY command you would
        # use cursor.copy_from() with a file-like object instead
        bulk_data = [
            (1001, 'page_view', '2024-07-01 10:00:00', '{"page": "/home"}'),
            (1002, 'click', '2024-07-01 10:01:00', '{"element": "button"}'),
            # ... thousands more records
        ] * 1000  # 2000 records total

        cursor.executemany(
            "INSERT INTO temp_events VALUES (%s, %s, %s, %s)",
            bulk_data
        )

        load_time = time.time() - start_time
        print(f"Loaded {len(bulk_data)} records in {load_time:.2f}s")
        print(f"Throughput: {len(bulk_data) / load_time:.0f} records/sec")

        conn.commit()

    # Show performance statistics
    stats = performance_monitor.get_stats()
    print("\nPerformance Statistics:")
    print(f"Total query time: {stats['total_query_time']:.3f}s")
    print(f"Average query time: {stats['average_query_time']*1000:.2f}ms")
    print(f"Queries per second: {stats['queries_per_second']:.0f}")

# Run the analysis
analyze_user_behavior()
```
```python
from gosql.mysql import connect

# Configure the connection pool for high-traffic applications
conn = connect(
    host="db.example.com",
    user="app_user",
    password="secure_password",
    database="production",
    pool_size=50,        # Maximum 50 connections
    max_lifetime=3600,   # Connections expire after 1 hour
    max_idle_time=300,   # Close idle connections after 5 minutes
    charset="utf8mb4"
)
```
```python
from gosql.postgres import connect

# Secure connection with SSL
conn = connect(
    host="secure-db.example.com",
    user="secure_user",
    password="secure_password",
    database="sensitive_data",
    sslmode="require",                    # Require SSL
    sslcert="/path/to/client-cert.pem",
    sslkey="/path/to/client-key.pem",
    sslrootcert="/path/to/ca-cert.pem"
)
```
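Whichever backend you use, avoid hardcoding credentials as in the snippets above. Reading them from the environment is a minimal sketch of the safer pattern; the variable names below follow libpq conventions but are just examples, and the same approach applies to the MySQL and SQL Server connectors:

```python
import os
from gosql.postgres import connect

conn = connect(
    host=os.environ.get("PGHOST", "localhost"),
    user=os.environ["PGUSER"],
    password=os.environ["PGPASSWORD"],    # keep secrets out of source control
    database=os.environ.get("PGDATABASE", "sensitive_data"),
    sslmode="require"
)
```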
Run your own performance benchmarks:
```python
from gosql.benchmarks import BenchmarkRunner

# Configure database connections
mysql_config = {
    'host': 'localhost',
    'user': 'root',
    'password': 'password',
    'database': 'test'
}

# Run comprehensive benchmarks
runner = BenchmarkRunner()
runner.run_mysql_benchmarks(mysql_config, iterations=1000)
runner.print_summary()
runner.generate_report("benchmark_results.json")
```
Sample benchmark output:
```text
BENCHMARK SUMMARY
================================================================================

SIMPLE_QUERY OPERATION:
----------------------------------------
GoSQL MySQL            | Avg: 0.85ms | P95: 1.2ms | QPS: 1176 | Mem: 12.1MB (3.1x faster)
mysql-connector-python | Avg: 2.6ms  | P95: 4.8ms | QPS: 385  | Mem: 34.7MB

BATCH_INSERT OPERATION:
----------------------------------------
GoSQL MySQL            | Avg: 45.2ms | P95: 67ms  | QPS: 22   | Mem: 15.3MB (2.8x faster)
mysql-connector-python | Avg: 127ms  | P95: 189ms | QPS: 8    | Mem: 42.1MB
```
GoSQL leverages Go's superior performance characteristics while maintaining Python's ease of use:
```text
┌─────────────────────────┐
│   Python Application    │
├─────────────────────────┤
│   GoSQL Python API      │ ← 100% compatible with native connectors
├─────────────────────────┤
│      CGO Bridge         │ ← High-performance Go-Python interface
├─────────────────────────┤
│    Go Core Engine       │ ← Optimized connection pooling & query execution
├─────────────────────────┤
│   Database Drivers      │ ← Native Go database drivers
│  MySQL | PostgreSQL     │
│        | SQL Server     │
└─────────────────────────┘
```
- Connection Pooling: Intelligent connection reuse and lifecycle management
- Batch Processing: Optimized bulk operations with reduced round trips
- Memory Management: Zero-copy data transfer where possible
- Type Conversion: Pre-compiled type converters for common data types
- Query Optimization: Parameter placeholder conversion and query caching (see the sketch below)
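To illustrate the last point: drivers differ in placeholder style (`%s` for MySQL/PostgreSQL, `?` for SQL Server), so a query written for one style must be rewritten for another. Below is a deliberately naive Python sketch of the idea; the real conversion happens in the Go core and must also handle placeholders inside string literals and comments:

```python
def to_qmark(query: str) -> str:
    """Rewrite printf-style '%s' placeholders to ODBC-style '?'.

    Naive sketch: does not skip quoted literals or SQL comments.
    """
    return query.replace("%s", "?")

assert to_qmark("SELECT * FROM t WHERE a = %s AND b = %s") == \
    "SELECT * FROM t WHERE a = ? AND b = ?"
```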
```python
# Before
import mysql.connector
conn = mysql.connector.connect(host="localhost", user="root", password="secret")

# After - just change the import!
from gosql.mysql import connector
conn = connector.connect(host="localhost", user="root", password="secret")
```
```python
# Before
import psycopg2
conn = psycopg2.connect("host=localhost user=postgres password=secret")

# After
from gosql.postgres import connect
conn = connect(host="localhost", user="postgres", password="secret")
```
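If your codebase passes psycopg2-style DSN strings, a small helper can translate simple DSNs to keyword arguments. This is a naive sketch: it does not handle quoted or space-containing values, and psycopg2's `dbname` key is mapped to the `database` keyword shown above:

```python
def dsn_to_kwargs(dsn: str) -> dict:
    """Convert a space-separated 'key=value' DSN into connect() kwargs."""
    kwargs = dict(pair.split("=", 1) for pair in dsn.split())
    if "dbname" in kwargs:                 # psycopg2 DSNs use dbname
        kwargs["database"] = kwargs.pop("dbname")
    return kwargs

conn = connect(**dsn_to_kwargs("host=localhost user=postgres password=secret dbname=analytics"))
```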
```python
# Before
import pyodbc
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=test;UID=sa;PWD=secret')

# After
from gosql.mssql import connect
conn = connect(server="localhost", database="test", user="sa", password="secret")
```
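During a gradual rollout you can fall back to the native connector when GoSQL is not installed; because the APIs match, the rest of the code stays unchanged. A sketch assuming the import paths shown above:

```python
try:
    from gosql.mysql import connector      # preferred: GoSQL backend
except ImportError:
    import mysql.connector as connector    # fallback: native connector

# Identical call either way, thanks to the API compatibility
conn = connector.connect(host="localhost", user="root", password="secret")
```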
Run the test suite:
```bash
# Install test dependencies
pip install pytest pytest-cov

# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=gosql --cov-report=html
```
Use GoSQL in containerized environments:
```dockerfile
FROM python:3.9-slim

# Install GoSQL
RUN pip install gosql-connector

COPY your_app.py /app/
WORKDIR /app

CMD ["python", "your_app.py"]
```
We welcome contributions! Please see our Contributing Guide for details.
```bash
# Clone the repository
git clone https://github.com/coffeecms/gosql.git
cd gosql

# Install development dependencies
pip install -e ".[dev]"

# Run tests
make test

# Run benchmarks
make benchmark
```
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: https://gosql.readthedocs.io
- Bug Reports: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@coffeecms.com
- v1.1: Support for SQLite and Oracle databases
- v1.2: Async/await support for asynchronous operations
- v1.3: Advanced query optimization and caching
- v1.4: GraphQL integration and ORM compatibility
- v2.0: Distributed query execution and sharding support
GoSQL - Bringing Go's performance to Python database operations!

Made with ❤️ by the CoffeeCMS team