The first step to integrating Coral Server with your application is a proper deployment of it. Exactly how you manage and deploy it depends on your application, but we offer a Docker image for Coral Server, and support orchestrating agents via Docker.
## Docker (recommended)
Coral Server looks for an application.yaml file in the path provided to it via the CONFIG_PATH environment variable. When running from Docker, that path defaults to /config/.
For that reason, to be able to configure Coral Server, you should create a config folder and mount it to /config when running:
```bash
# create your config dir, that our application.yaml will live inside
mkdir my-config

# and mount it when running
docker run -p 5555:5555 -v ./my-config:/config ghcr.io/coral-protocol/coral-server:latest
```
There is no need to restart the server if you are only changing application.yaml - it will hot reload your changes!
Our provided Docker image does not contain a Python runtime! This means you cannot run agents through the executable runtime, and must use Docker orchestration instead.
This is intentional: Docker orchestration is more stable, reproducible, and portable, and thus more production-ready.
To allow Coral Server to spin up containers while being inside a Docker container itself, we need to mount the host’s Docker socket into our container.
Since Docker behaves differently on each platform, how to do this varies:
### Linux
Mounting /var/run/docker.sock should be enough to give the server the ability to spin up containers.
```bash
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
```
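Putting the pieces together, a full run command on Linux (the config folder plus the Docker socket, using the same names as above) might look like:

```bash
docker run \
  -p 5555:5555 \
  -v ./my-config:/config \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/coral-protocol/coral-server:latest
```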
### macOS
We recommend you install colima for a nicer experience. With colima, you can mount ~/.colima/docker.sock:
docker run -v "~/.colima/docker.sock:/var/run/docker.sock" ...
Without colima, you can mount ~/.docker/run/docker.sock to the container:
docker run -v "~/.docker/run/docker.sock:/var/run/docker.sock" ...
Note how we mount to /var/run/docker.sock inside the container, so we don’t need to set DOCKER_SOCKET.
### Windows
Mount //var/run/docker.sock:
docker run -v "//var/run/docker.sock://var/run/docker.sock" ...
If that doesn’t work, mount //./pipe/docker_engine, and point to it with the DOCKER_SOCKET environment variable:
```powershell
docker run `
  -v "//./pipe/docker_engine://./pipe/docker_engine" `
  -e "DOCKER_SOCKET=npipe:////./pipe/docker_engine" `
  ...
```
## Java
Clone the repo and build the jar file:
```bash
git clone https://github.com/Coral-Protocol/coral-server.git
cd coral-server
./gradlew build --no-daemon -x test

# the resulting .jar will end up in build/libs/coral-server-[..].jar
```
Coral Server looks for an application.yaml file in the path provided to it via the CONFIG_PATH environment variable. When unset, that path defaults to ./src/main/resources/, to make development easy.
For production, we recommend you set CONFIG_PATH to somewhere more easily accessible (and not inside the cloned repo folder):
```bash
# make a folder outside of the repo folder, that our application.yaml will live inside
mkdir ../coral-config/

# and point at that folder when running Coral Server
export CONFIG_PATH=../coral-config/
java -jar "<name of jar file>"
```
There is no need to restart the server if you are only changing application.yaml - it will hot reload your changes!
You should also consider writing a systemd service that runs your jar file, for more reliable deployments.
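A minimal unit file sketch, assuming the jar has been copied to /opt/coral-server/ and the config folder lives at /etc/coral/ (both paths, and the java location, are placeholders for your own layout):

```ini
# /etc/systemd/system/coral-server.service
[Unit]
Description=Coral Server
After=network.target

[Service]
# placeholder paths - point these at your own config folder and jar
Environment=CONFIG_PATH=/etc/coral/
ExecStart=/usr/bin/java -jar /opt/coral-server/coral-server.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now coral-server`.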
## Creating sessions

In production, the only ways you should be interfacing with Coral Server and the agents inside it are through the session creation endpoint (/sessions) and custom tools.
Creating sessions is how you define and orchestrate agent graphs. While Coral Studio provides a ready-made interface for doing it, when integrating Coral with your application, you’ll want to use Coral Server’s APIs directly.
```bash
curl \
  -X POST \
  -H "Content-Type: application/json" \
  --url http://localhost:5555/sessions/ \
  -d "[JSON BODY HERE]"
```
To create a session, send a POST request to /sessions, with a JSON body containing your session parameters. Here are some sample request bodies:
```jsonc
// Single agent session, with a custom prompt and an API key passed in.
{
  "applicationId": "...",
  "privacyKey": "...",
  "agentGraph": {
    "agents": {
      // each agent in a session is given a unique (for the session) name
      "my-agent": {
        "type": "local", // we want to run an agent from our local registry
        "agentType": "interface", // the name of the agent in the registry
        // agents in a registry have a set of options you can set
        "options": {
          "OPENAI_API_KEY": "..."
        },
        "tools": [], // lets us pass custom tools to individual agents
        "systemPrompt": "Speak like a pirate!" // optional field for adding custom prompts
      }
    },
    // links define which agents can "see" each other
    "links": [ ["my-agent"] ],
    "tools": {} // define our custom tools (see the section below)
  }
}
```
A few of these fields deserve more detail:

- `options`: the options (as defined in the registry) that we want to set. Each key is the name of an option we are setting, with the value being a string or number, depending on the exposed option’s type.
- `links`: which agents can interact with each other, defined as a list of ‘groups’ of agents, where each agent in a group can “see” every other agent in that same group.
For example:
{ "links": [ ["a", "b", "c"], ["c", "d"] ]}
This defines two groups: agents a, b & c can all interact, and agents c & d can interact. Agents a & b, however, cannot interact with agent d.
- `transport`: how this custom tool is actually executed when the corresponding MCP tool is called by an agent. With the `http` transport, the tool is called via an HTTP POST request to `url`, with the session and agent IDs appended to the path, and the associated MCP tool call’s input passed in as the request body.
For example, if the url is http://localhost:1234/my-tool/, then when an agent with ID foo in a session bar calls the tool, a POST request is made to http://localhost:1234/my-tool/foo/bar, the response of which is used to resolve the MCP tool call.
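As a rough illustration of that contract, here is a minimal handler sketch in TypeScript with Express (the framework, port, and response shape are our assumptions, not a prescribed implementation; the route mirrors the example above):

```ts
import express from "express";

const app = express();
app.use(express.json());

// matches POST http://localhost:1234/my-tool/foo/bar from the example above,
// where foo is the agent ID and bar is the session ID
app.post("/my-tool/:agentId/:sessionId", (req, res) => {
  const { agentId, sessionId } = req.params;
  // req.body is the MCP tool call's input
  console.log(`tool call from agent ${agentId} in session ${sessionId}:`, req.body);
  // whatever we respond with here is used to resolve the MCP tool call
  res.json({ result: "did the thing" });
});

app.listen(1234);
```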
## Custom tools

There are many scenarios in which agents need application-specific capabilities that can’t be built into the agents themselves (for example, when you are using 3rd-party agents). For that reason, Coral Server supports injecting custom MCP tools at runtime.
A common use case in applications is exposing some kind of “chat”-style agent to your end users. This can be implemented using custom tools.
Coral Studio implements this exact use case for easy local development. Feel free to browse the source code for an example implementation.
You’ll need to implement two tools, `request-input` and `respond-to-input` (you can call them whatever you like).
The flow looks (roughly) like this (see the sketch after the list):

1. The agent calls `request-input` when it’s ready; the call hangs until there is user input.
2. Your implementation of `request-input` propagates this input request to your frontend, and resolves it once the user enters something (in, say, a chat-style UI).
3. The agent does whatever work it needs based on that user input.
4. When the agent is ready to respond, it calls `respond-to-input`, with the answer/response as input.
5. Your implementation of `respond-to-input` then carries that response to the frontend, to display to your end user.
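A sketch of what your side of those two tools could look like, again in TypeScript with Express; the in-memory plumbing, port, and frontend-facing endpoints are all illustrative assumptions (a real application would track pending requests per session):

```ts
import express from "express";

const app = express();
app.use(express.json());

// naive in-memory plumbing between the tool endpoints and a chat UI;
// a real application would scope this per session
let pendingInput: ((input: string) => void) | null = null;
let lastResponse: string | null = null;

// backs the request-input tool: hangs until the frontend supplies user input
app.post("/api/mcp-tools/request-input/:agentId/:sessionId", async (_req, res) => {
  const input = await new Promise<string>((resolve) => {
    pendingInput = resolve;
  });
  // the response resolves the agent's MCP tool call (exact shape is up to you)
  res.json({ input });
});

// backs the respond-to-input tool: stores the agent's answer for the frontend
app.post("/api/mcp-tools/respond-to-input/:agentId/:sessionId", (req, res) => {
  lastResponse = req.body.response; // matches the inputSchema defined below
  res.json({ ok: true });
});

// hypothetical frontend-facing endpoints for a chat-style UI
app.post("/api/chat/send", (req, res) => {
  pendingInput?.(req.body.message);
  pendingInput = null;
  res.json({ ok: true });
});
app.get("/api/chat/response", (_req, res) => {
  res.json({ response: lastResponse });
});

app.listen(3000);
```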
An example definition for these tools (in your /sessions body) could look like the following:
```jsonc
{
  // ...,
  "tools": {
    "request-input": {
      "transport": {
        "type": "http",
        "url": "http://[your-application]/api/mcp-tools/request-input"
      },
      "toolSchema": {
        "name": "request-input",
        "description": "Request input from the user. Hangs until input is received.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "message": {
              "type": "string",
              "description": "Message to show to the user."
            }
          }
        }
      }
    },
    "respond-to-input": {
      "transport": {
        "type": "http",
        "url": "http://[your-application]/api/mcp-tools/respond-to-input"
      },
      "toolSchema": {
        "name": "respond-to-input",
        "description": "Respond to the last input you received from the user. You can only respond once, and will have to request more input later.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "response": {
              "type": "string",
              "description": "Response to show to the user."
            }
          },
          "required": ["response"]
        }
      }
    }
  }
}
```