Data & Engineering
LLM API Gateway with Observability
The challenge
Why it exists
Teams integrating LLMs needed to manage separate SDKs and connection logic for each provider. There was also no visibility into how those LLM calls were performing across the different integrations.
The approach
How it works
Built a set of dedicated AWS Lambda functions, one per LLM provider (Gemini and OpenAI), each acting as a clean API wrapper around that provider's SDK. Any frontend or backend can switch between models simply by calling the relevant Lambda Function URL, without needing to manage provider-specific SDKs or credentials directly. After the initial build, I used LLM-assisted code generation (before having access to Claude or Cursor) to plan and implement LangFuse observability in the Lambda functions, adding prompt tracing, latency monitoring, and usage logging. This project was also my first real experiment with using LLMs to generate and integrate code into an existing working codebase, giving me hands-on experience with prompt engineering for code tasks and iterating with AI on real deployments.
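As a rough illustration of the per-provider wrapper shape, here is a minimal sketch of one such Lambda handler. The event fields, default model name, and handler structure are assumptions for illustration; the provider call is stubbed out where the real function would invoke the provider's SDK.

```python
import json

def parse_function_url_event(event):
    """Extract prompt and optional model from a Lambda Function URL event body."""
    body = json.loads(event.get("body") or "{}")
    # "gpt-4o-mini" is just an illustrative default, not necessarily the real one.
    return {"prompt": body.get("prompt", ""), "model": body.get("model", "gpt-4o-mini")}

def lambda_handler(event, context, call_provider=None):
    """Generic handler shape; call_provider is injected so the provider SDK stays swappable."""
    req = parse_function_url_event(event)
    if not req["prompt"]:
        return {"statusCode": 400, "body": json.dumps({"error": "prompt is required"})}
    if call_provider is None:
        # In the real function this line would call the provider SDK
        # (e.g. OpenAI's client.chat.completions.create); stubbed here.
        call_provider = lambda prompt, model: f"[{model}] echo: {prompt}"
    answer = call_provider(req["prompt"], req["model"])
    return {"statusCode": 200, "body": json.dumps({"model": req["model"], "completion": answer})}
```

Keeping the handler thin and the provider call injectable is what lets each Lambda stay a "clean wrapper": clients never see provider SDKs or credentials.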
Key capabilities
What it does
One dedicated AWS Lambda function per LLM provider (Gemini and OpenAI), each a clean API wrapper around that provider's SDK.
A single Lambda Function URL per provider, so any frontend or backend can switch models without managing provider-specific SDKs or credentials.
LangFuse observability built in: prompt tracing, latency monitoring, and usage logging.
Developed in part with LLM-assisted code generation, iterating with AI against a live, deployed codebase.
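From the caller's side, switching providers reduces to switching URLs. The sketch below shows that idea with stdlib-only code; the Function URLs are placeholders (the real endpoints are not public), and the payload shape is an assumption matching the handler sketch above.

```python
import json
import urllib.request

# Placeholder Function URLs -- the real endpoints are private.
PROVIDER_URLS = {
    "openai": "https://example-openai.lambda-url.us-east-1.on.aws/",
    "gemini": "https://example-gemini.lambda-url.us-east-1.on.aws/",
}

def build_request(provider, prompt, model=None):
    """Build the HTTP POST for the chosen provider's Function URL."""
    payload = {"prompt": prompt}
    if model:
        payload["model"] = model
    return urllib.request.Request(
        PROVIDER_URLS[provider],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(provider, prompt, model=None):
    """Switching providers is just switching URLs; no provider SDK on the client."""
    with urllib.request.urlopen(build_request(provider, prompt, model)) as resp:
        return json.loads(resp.read())
```

Because the client only speaks plain HTTP/JSON, comparing providers is a one-line change (`ask("openai", ...)` vs `ask("gemini", ...)`).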
Typically used by
Engineering / AI teams
Business impact
Reduces integration effort when switching or comparing LLM providers. Any team can plug into a single endpoint per provider instead of managing multiple SDKs. LangFuse observability enables cost tracking and debugging, which is critical before scaling LLM usage in production.
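To make the cost-tracking point concrete: each traced call records enough to debug and to estimate spend. The project uses LangFuse for this; the snippet below is a minimal stand-in showing the kind of data captured per call (prompt, latency, a usage figure), not the LangFuse SDK itself.

```python
import time

def traced_call(call_provider, prompt, model):
    """Wrap a provider call and return (result, trace_record)."""
    start = time.perf_counter()
    result = call_provider(prompt, model)
    latency_ms = (time.perf_counter() - start) * 1000
    record = {
        "model": model,
        "prompt": prompt,
        "latency_ms": round(latency_ms, 2),
        # Crude word-count proxy; real provider SDKs return exact token counts.
        "approx_tokens": len(prompt.split()) + len(str(result).split()),
    }
    return result, record
```

Aggregating such records per model and per caller is what turns "we call LLMs" into answerable questions like "which integration is slow" and "what will this cost at 10x traffic".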
Built with
Technology
Tools & Frameworks
Integrations
More in Data & Engineering
Related applications
Data & Engineering
Database Field Mapping & Discovery Tool
Connects to databases
Extracts database metadata (schemas, tables, columns, relationships)
Generates data dictionaries and ER diagrams
Integrates with the Gemini API to enable conversational…
Data & Engineering
SQL Standardization & Optimization Bot
To solve the issues above, I created the SQL Standardization & Optimization AI bot. It helps the whole team maintain consistent SQL standards.
Data & Engineering
Conversational Data Profiler
A web app powered by an AI bot that enables Exploratory Data Analysis (EDA) without requiring Python or traditional BI tools. It accepts multiple data formats, including Excel, CSV,…
Want something like this for your team?
We'll map your workflow and scope a working prototype — typically in three weeks, not three months.
Talk to us