Harness the Power of Python for Data-Driven Business Decisions
Senior Python data consultant in Switzerland specializing in end-to-end data solutions with pandas, NumPy, scikit-learn, and FastAPI. I build high-performance data pipelines, REST APIs, and analytical tools that turn raw data into actionable business intelligence. 8+ years delivering production-grade Python solutions for Swiss enterprises.
Deep expertise in pandas, NumPy, scikit-learn, and SciPy for advanced analytics
Production-ready API development with FastAPI, Flask, and Django REST
Scalable ETL and data pipeline architecture using Apache Airflow and Prefect
Statistical modeling and hypothesis testing for evidence-based decisions
Web scraping and automated data collection at enterprise scale
Clean, testable, PEP 8-compliant code with full CI/CD integration
Performance optimization — profiling, Cython, async, and multiprocessing
Seamless integration with cloud platforms (AWS, Azure, GCP)
Based in Switzerland with experience across banking, pharma, and logistics
Design and implement robust ETL/ELT pipelines with Apache Airflow, Prefect, or custom Python solutions. Ingest data from APIs, databases, files, and streaming sources into clean, analytics-ready datasets.
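A minimal sketch of how the extract-transform-load stages fit together in plain Python (the source records, field names, and cleaning rules are illustrative assumptions, not a client deliverable):

```python
# Toy ETL pipeline: each stage is a generator, so records stream through
# without materializing intermediate copies.

def extract(records):
    """Yield raw records from an upstream source (stand-in for an API or DB cursor)."""
    yield from records

def transform(rows):
    """Normalize field names, coerce types, and drop incomplete records."""
    for row in rows:
        if not row.get("amount"):
            continue  # skip records missing the required field
        yield {
            "customer": row["customer"].strip().title(),
            "amount_chf": round(float(row["amount"]), 2),
        }

def load(rows):
    """Collect into an analytics-ready list (stand-in for a warehouse insert)."""
    return list(rows)

raw = [
    {"customer": " acme ag ", "amount": "1999.504"},
    {"customer": "Globex", "amount": None},  # incomplete: dropped by transform
    {"customer": "initech", "amount": "75"},
]
clean = load(transform(extract(raw)))
```

In production, the same three-stage shape maps directly onto Airflow or Prefect tasks, which add scheduling, retries, and observability around each stage.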
Build high-performance RESTful APIs with FastAPI or Flask to expose data products, ML models, and analytics to internal teams and external partners, complete with authentication, rate limiting, and auto-generated documentation.
Apply rigorous statistical methods — regression, time series, A/B testing, Bayesian analysis — using statsmodels, SciPy, and scikit-learn to extract insights and validate business hypotheses.
Develop reliable, scalable web scraping systems using Scrapy, Selenium, and BeautifulSoup. Automate data collection from websites, PDFs, and unstructured sources with built-in error handling and monitoring.
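The parsing core of such a system, sketched with BeautifulSoup (the HTML here is a static stand-in; in production the markup comes from an HTTP client or Selenium session wrapped in retries and monitoring):

```python
from bs4 import BeautifulSoup

# Static HTML standing in for a fetched page.
html = """
<table id="prices">
  <tr><th>Product</th><th>Price (CHF)</th></tr>
  <tr><td>Widget</td><td>19.90</td></tr>
  <tr><td>Gadget</td><td>49.00</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = soup.select("#prices tr")[1:]  # skip the header row
products = [
    {"name": r.find_all("td")[0].text, "price": float(r.find_all("td")[1].text)}
    for r in rows
]
```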
Automate repetitive business processes — report generation, data validation, file processing, email workflows — with production-grade Python scripts that save hours of manual work every week.
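A representative data-validation step from such a workflow, using only the standard library (the CSV columns and rules are assumptions for the sketch):

```python
import csv
import io

def validate_rows(csv_text):
    """Return (valid_rows, errors) for a CSV with 'invoice' and 'amount' columns."""
    valid, errors = [], []
    # start=2 so reported line numbers account for the header row
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        try:
            amount = float(row["amount"])
        except (ValueError, KeyError):
            errors.append(f"line {i}: bad amount {row.get('amount')!r}")
            continue
        if amount < 0:
            errors.append(f"line {i}: negative amount")
            continue
        valid.append({"invoice": row["invoice"], "amount": amount})
    return valid, errors

sample = "invoice,amount\nA-100,250.00\nA-101,abc\nA-102,-5\n"
valid, errors = validate_rows(sample)
```

Errors are collected rather than raised, so a nightly job can process everything it can and email a precise exception report for the rest.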
Process and transform large datasets efficiently using pandas, Dask, and PySpark. Handle data cleaning, deduplication, enrichment, and normalization for analytics and machine learning workloads.
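A miniature example of the cleaning-deduplication-normalization pass in pandas (the dataset and column names are invented; the same chain scales to Dask or PySpark for larger-than-memory data):

```python
import pandas as pd

# Toy dataset with the typical problems: duplicates hidden by
# inconsistent casing and whitespace, plus a missing key field.
df = pd.DataFrame({
    "customer": ["Acme AG", "acme ag", "Globex", None],
    "city": ["Zurich", "Zurich", "Geneva", "Basel"],
    "revenue": [1200.0, 1200.0, 850.0, 400.0],
})

clean = (
    df.dropna(subset=["customer"])                   # drop rows missing the key field
      .assign(customer=lambda d: d["customer"].str.strip().str.title())
      .drop_duplicates(subset=["customer", "city"])  # dedupe after normalization
      .reset_index(drop=True)
)
```

Normalizing before deduplicating is the important ordering here: "Acme AG" and "acme ag" only collapse into one record once casing and whitespace are standardized.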