AI Gateway Microservice Technical Documentation
AI Gateway’s purpose
The AI Gateway microservice serves as a centralized abstraction layer for integrating AI-powered conversational capabilities into client applications. It is designed to provide configurable access to one or more AI providers based on credentials and configuration specific to each client instance.
This microservice currently supports chat-based interactions and is built with extensibility in mind to accommodate future AI functionalities such as embeddings or image generation.
Key features
Multi-provider support with per-client credential isolation
Unified API interface for initiating and managing AI chat sessions
Improved performance by leveraging Spring's conditional bean creation to instantiate only the providers for which a configuration has been provided
API interface for checking which AI providers are available based on the configuration provided for the client instance
Available providers
Anthropic
OpenAI
DeepSeek
PSChat
Bodhi
Microservice design
The AI Gateway microservice is built with a modular, provider-agnostic architecture that dynamically initializes AI provider services based on the presence of a client-specific configuration. This design ensures that only the necessary components for the configured providers are loaded, optimizing resource usage and enabling multi-provider support with clean separation of concerns.
Dynamic Bean Creation
For each supported AI provider, the microservice conditionally creates the following components when a valid configuration is detected for a client instance:
A provider-specific configuration bean
A provider-specific service bean implementing the common AIProviderService interface
These beans are only instantiated if the corresponding credentials or configuration values are present in the runtime environment. The main credential that conditions the creation of any AI provider bean is the presence of the api-key.
Example configuration
AI:
  providers:
    openAI:
      api-key: ${OPENAI_API_KEY}

With this example, the OpenAI configuration and service beans will be instantiated.
AIServiceRegistry
All initialized AIProviderService service beans (if any) are injected into a central component called AIServiceRegistry. The registry performs the following functions:
Accepts a list of AIProviderService implementations during application startup
Stores each provider in an internal Map<String, AIProviderService>, where the key is the unique identifier for the AI provider (e.g., "openAI", "anthropic")
Exposes routing logic that allows consumers of the microservice to invoke provider-specific functionality based on the requested provider key
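Conceptually, the registry's behavior can be sketched in plain Java. This is a minimal illustration, not the actual implementation: the `getService` lookup method and the single-method interface shown here are assumptions (the real `AIProviderService` interface also declares the chat-generation methods described later).

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Simplified common contract implemented by every provider service.
interface AIProviderService {
    String getAIProviderServiceName();
}

// Central router: collects the provider beans that Spring conditionally
// instantiated and routes requests by provider key.
class AIServiceRegistry {
    private final Map<String, AIProviderService> services;

    // In the real application, Spring injects the list of conditionally
    // created AIProviderService beans at startup.
    AIServiceRegistry(List<AIProviderService> providerServices) {
        this.services = providerServices.stream()
                .collect(Collectors.toMap(
                        AIProviderService::getAIProviderServiceName,
                        Function.identity()));
    }

    // Hypothetical lookup method; fails fast if the requested provider
    // was not configured for this client instance.
    AIProviderService getService(String providerKey) {
        AIProviderService service = services.get(providerKey);
        if (service == null) {
            throw new IllegalArgumentException("AI provider not configured: " + providerKey);
        }
        return service;
    }
}
```

Because the registry is keyed by `getAIProviderServiceName()`, adding a provider requires no registry changes: a new conditional bean is picked up automatically.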
Creating a new AI provider
To support a new AI provider, follow the steps below:
Define Configuration Class
Requirements:
For better performance, the configuration class and its related beans should be conditional on the presence of the AI provider configuration (at least the api-key)
If the configuration is provided, then the new AI provider client should be created as a bean
@Configuration
@ConditionalOnProperty(value = "AI.providers.<new-AI-provider>.api-key")
@ConditionalOnExpression("'${AI.providers.<new-AI-provider>.api-key:null}' != 'null'")
@ConfigurationProperties("AI.providers.<new-AI-provider>")
@Data
public class NewAIProviderConfig {
    private String apiKey;
    // other AI provider related configuration properties

    // Create the beans which allow the connectivity to the new AI provider
}

Define a Service Class
Requirements:
The created service class must implement the AIProviderService interface so it can be used in the AIServiceRegistry router
For better performance, the service class should be conditional on the presence of the AI provider configuration (at least the api-key)
@Service
@ConditionalOnProperty(value = "AI.providers.<new-AI-provider>.api-key")
@ConditionalOnExpression("'${AI.providers.<new-AI-provider>.api-key:null}' != 'null'")
@Slf4j
@AllArgsConstructor
public class NewAIProviderService implements AIProviderService {
    private final <new-AI-provider-client> newAIProviderClient; // the AI provider client constructed in the configuration class at step 1

    @Override
    public AIChatGenerationResponse handleChatGenerationRequest(AIChatGenerationRequest request) {
        // implementation to make the requests
    }

    @Override
    public String getAIProviderServiceName() {
        return "<new-AI-provider-name>"; // AI provider identifier
    }

    @Override
    public AIProvider getAIProviderDetails() {
        // implementation to retrieve the new AI provider details
    }
}

Update the application.yml structure
Requirements:
Make sure that a null value will be used if the client instance doesn't contain the new AI provider credentials
The new AI provider credentials should be passed as environment variables
AI:
  providers:
    <new-AI-provider>:
      api-key: ${NEW_AI_PROVIDER_API_KEY:null}

Write integration tests
Requirements
The integration test written for the new AI provider should also be conditional on the presence of the required configuration, supplied through an environment variable
Any integration test written should extend the AbstractIntegrationTest class
@TestPropertySource(properties = {
    "AI.providers.<new-AI-provider>.api-key=${NEW_AI_PROVIDER_API_KEY:null}"
})
@EnabledIf(value = "#{environment.getProperty('AI.providers.<new-AI-provider>.api-key') != 'null'}", loadContext = true)
class NewAIProviderServiceIntegrationTest extends AbstractIntegrationTest {
    // your tests
}

Update the current documentation with the changes
Implemented examples
Providers supported by Spring AI
Take a look at the OpenAI or Anthropic AI providers' implementation (e.g. the OpenAIConfig, OpenAIService, and OpenAIServiceIntegrationTest classes)
Custom AI providers
Take a look at the custom AI provider documentation (e.g. the CustomAIConfig, CustomAIService, and CustomAIServiceIntegrationTest classes)
Authorization mechanism
Overview
All incoming requests must be authenticated using a Bearer token (JWT). The microservice validates this token through a series of strict checks before processing the request.
JWT Validation Filters
The following validation steps are performed for every incoming request:
Presence of Authorization Header
The HTTP request must contain a header in the format:
Authorization: Bearer <JWT>
Mandatory JWT Claims
The token must include the following standard claims:
sub (Subject) – Identifies the authenticated entity.
aud (Audience) – Must match the service identifier of the AI-gateway.
exp (Expiration) – Must be a valid future timestamp.
iss (Issuer) – Must match one of the trusted token issuers.
Issuer Verification
The iss claim must exactly match one of the trusted issuers configured for the AI-gateway. Tokens from unknown issuers are rejected.
Audience Verification
The aud claim must be equal to the expected service ID of the AI-gateway. This ensures the token was issued specifically for this microservice.
Expiration Validation
The exp claim must indicate a valid future timestamp.
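The claim checks above can be sketched in framework-free Java. This is a simplified illustration only: the `JwtClaimValidator` name and method shape are assumptions, and it operates on already-decoded claim values; in the service, decoding and signature verification against the shared secret are performed before these checks run.

```java
import java.time.Instant;
import java.util.Set;

// Illustrative validator applying the mandatory-claim rules:
// sub present, iss trusted, aud equal to the AI-gateway service ID,
// exp strictly in the future.
class JwtClaimValidator {
    private final Set<String> trustedIssuers;
    private final String expectedAudience;

    JwtClaimValidator(Set<String> trustedIssuers, String expectedAudience) {
        this.trustedIssuers = trustedIssuers;
        this.expectedAudience = expectedAudience;
    }

    boolean isValid(String sub, String iss, String aud, Instant exp, Instant now) {
        return sub != null && !sub.isBlank()      // subject must identify the caller
                && trustedIssuers.contains(iss)   // issuer must be trusted
                && expectedAudience.equals(aud)   // token must target this service
                && exp != null && exp.isAfter(now); // token must not be expired
    }
}
```

A token failing any single check is rejected; the checks are deliberately independent so a misconfigured issuer cannot be masked by a valid audience.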
JWT decoder secret
Currently, the microservice uses the same secret as the one used for encoding the central auth JWTs.
The same secret must be used for issuing the JWT tokens from the caller service.
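To illustrate the shared-secret requirement, a caller could mint a compatible HS256 token using only the JDK. This is a hedged sketch: real caller services would typically use a JWT library, and the claim values shown are placeholders, not actual service IDs.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.time.Instant;
import java.util.Base64;

// Hypothetical caller-side issuer producing an HS256-signed JWT with the
// claims the AI-gateway validates (sub, iss, aud, exp), signed with the
// same secret the gateway uses for decoding.
class JwtIssuer {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    static String issue(String secret, String issuer, String audience, long ttlSeconds) {
        String header = B64.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        long exp = Instant.now().getEpochSecond() + ttlSeconds;
        String payloadJson = String.format(
                "{\"sub\":\"%s\",\"iss\":\"%s\",\"aud\":\"%s\",\"exp\":%d}",
                issuer, issuer, audience, exp);
        String payload = B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            String signature = B64.encodeToString(
                    mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
            return signingInput + "." + signature;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The resulting token is sent as `Authorization: Bearer <JWT>`; if the caller signs with any other secret, the gateway's decoder rejects the request.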
JWT Configuration
auth:
  secret: ${AUTH_SECRET}
  trusted-issuers: ${TRUSTED_ISSUERS}
  audience: ${KNOWHOW_AI_GATEWAY_SERVICE_ID}

secret
The secret used for parsing the received JWT
The microservice that issues the JWT must use the same value for signing

trusted-issuers
Comma separated microservice IDs that are allowed to call the AI-gateway

audience
The AI-gateway microservice ID
This is used to validate that the JWT was intended to be used for this service
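As an illustration of the trusted-issuers format, the comma-separated value can be parsed into a set at startup. The `TrustedIssuers` helper below is hypothetical, not part of the service's actual code.

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical helper: parses the comma-separated TRUSTED_ISSUERS value
// into the set of microservice IDs allowed to call the AI-gateway.
class TrustedIssuers {
    static Set<String> parse(String raw) {
        if (raw == null || raw.isBlank()) {
            return Set.of(); // no issuers configured: every token is rejected
        }
        return Arrays.stream(raw.split(","))
                .map(String::trim)            // tolerate spaces after commas
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toSet());
    }
}
```

Issuer verification then reduces to a set-membership check against the parsed value.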
Microservice IDs naming convention
The proposed naming convention is
<platform_name>.<component>.<sub-component>
Example:
knowhow.AI-gateway.api
Local setup
Clone the GitHub repository
Set the following environment variables
JWT configuration variables
AUTH_SECRET
TRUSTED_ISSUERS
KNOWHOW_AI_GATEWAY_SERVICE_ID
One or more AI provider configurations
ANTHROPIC_API_KEY
or
OPENAI_API_KEY
or
both of the above
Run the application
The application will start on port 7001
Running the integration tests
Create an .env.integration-test file under the src/test/resources/config/ path
Provide your preferred AI provider(s) configuration in the same way as presented in the Local setup guide

Example .env.integration-test

OPENAI_API_KEY=<your_api_key>

With this file, only the OpenAI-related integration tests will be run. The same can be provided for Anthropic.
© 2022 Publicis Sapient. All rights reserved.