Overview
Databricks MCP Server is a Model Context Protocol (MCP) server that connects to the Databricks API, letting LLMs interact with Databricks resources through natural language or structured MCP calls. It can run SQL queries on Databricks SQL warehouses, list all Databricks jobs in a workspace, and retrieve the status and detailed information of individual jobs.

The server runs on Python 3.7+ and is configured through a .env file or environment variables (DATABRICKS_HOST, DATABRICKS_TOKEN, DATABRICKS_HTTP_PATH). The host and personal access token come from your Databricks account, and the HTTP Path from your SQL warehouse.

Four MCP tools are exposed: run_sql_query(sql: str), list_jobs(), get_job_status(job_id: int), and get_job_details(job_id: int). Start the server with python main.py, test it interactively with the inspector via npx @modelcontextprotocol/inspector python3 main.py, or verify connectivity with the provided test_connection.py script.

Together, these pieces let LLMs query data, monitor jobs, and retrieve details from a Databricks account, enabling conversational workflows around Databricks data and jobs.
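A .env file using the three variables above might look like this (all values are placeholders — substitute your own workspace URL, personal access token, and warehouse HTTP path):

```
DATABRICKS_HOST=https://your-workspace.cloud.databricks.com
DATABRICKS_TOKEN=dapi_your_personal_access_token
DATABRICKS_HTTP_PATH=/sql/1.0/warehouses/your_warehouse_id
```

The HTTP Path is shown in the Connection Details tab of your SQL warehouse in the Databricks UI.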
Features
run_sql_query(sql: str)
Execute SQL queries against your Databricks SQL warehouse and return results.
list_jobs()
List all Databricks jobs in your workspace.
get_job_status(job_id: int)
Return the current status of a Databricks job by ID.
get_job_details(job_id: int)
Provide detailed information about a specific Databricks job.
Who Is This For?
- Language Models: Enable natural language interaction with Databricks via MCP for querying and job management.
- Developers: Integrate MCP to access Databricks SQL and jobs from applications.
- Data Engineers: Bridge data pipelines to Databricks resources using natural language commands.
