Great question! The nice thing about MCP is that it doesn’t really matter where your LLM is running, whether that’s in the cloud, on-prem, or even on your laptop. The MCP server just exposes ROS topics, services, and actions. On the other side, it’s your agent/runtime (the MCP client) that needs to know how to talk to both the LLM API and the MCP server, so you can point it at vLLM, Ollama, LM Studio, or any local API you host yourself.
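
Here’s a rough sketch of that wiring, assuming the Python MCP SDK (`mcp` package), the `openai` client pointed at Ollama’s default local endpoint, and a hypothetical `ros-mcp-server` launch command; the model name and prompt are placeholders too:

```python
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import OpenAI

# Any OpenAI-compatible endpoint works; Ollama's default port is shown here.
llm = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

async def main() -> None:
    # Launch the MCP server as a subprocess over stdio (command is hypothetical).
    server = StdioServerParameters(command="ros-mcp-server")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Translate the MCP tool list into OpenAI-style function specs.
            listed = await session.list_tools()
            tools = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description or "",
                        "parameters": t.inputSchema,
                    },
                }
                for t in listed.tools
            ]

            resp = llm.chat.completions.create(
                model="llama3.1",  # any local model with tool-call support
                messages=[{"role": "user", "content": "List the active ROS topics."}],
                tools=tools,
            )

            # If the model requested a tool, forward the call to the MCP server.
            message = resp.choices[0].message
            if message.tool_calls:
                call = message.tool_calls[0]
                result = await session.call_tool(
                    call.function.name, json.loads(call.function.arguments)
                )
                print(result)

asyncio.run(main())
```

The same script should work against vLLM or LM Studio just by changing `base_url`, since they all speak the OpenAI chat-completions dialect.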

In terms of requirements: the LLM itself doesn’t have to be modified or “embedded” with MCP. The only real requirement is that the runtime you’re using supports structured tool/function calls (JSON in/out), since that’s how MCP describes ROS interfaces. That’s a much lower bar than building MCP support into the model itself.
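
To make the “JSON in/out” point concrete, here’s a minimal sketch of how an MCP server might describe a ROS interface as a tool, using the Python SDK’s `FastMCP` helper; the tool names and bodies are illustrative stubs, and a real server would call into rclpy or rosbridge instead:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ros-demo")

@mcp.tool()
def list_topics() -> list[str]:
    """Return the names of the active ROS topics."""
    # Stub: a real implementation would query the ROS graph here.
    return ["/cmd_vel", "/odom", "/scan"]

@mcp.tool()
def publish_cmd_vel(linear_x: float, angular_z: float) -> str:
    """Publish a geometry_msgs/Twist on /cmd_vel."""
    # Stub: a real implementation would publish via rclpy or rosbridge.
    return f"published linear.x={linear_x}, angular.z={angular_z}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The type hints on each function are what become the JSON schema the client advertises to the LLM, which is why tool-call support in the runtime is the only hard requirement on the model side.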
