Graph 'N' Bass


Explore music scenes as graphs using web search, Spotify metadata, and an LLM-driven extraction pipeline. The app builds a network of artists and renders it in an interactive web UI, along with a narrative report.

Overview

  • Input: a seed query describing a music scene (e.g., "UK '70s post punk").
  • Pipeline: web search → artist extraction → iterative expansion via related artists → Spotify genres context → graph building → final report generation.
  • Output: an interactive graph (nodes = artists; links = undirected relations) and a short report summarizing the scene.

Technology Stack

  • Backend

    • FastAPI: HTTP API and static file hosting.
    • Uvicorn: ASGI server.
    • LangChain + LangChain Community: Prompting and model integration.
    • LangGraph: Orchestrates the multi-step workflow as a state machine/graph.
    • Ollama Chat via ChatOllama: Local LLM runtime (configurable via env).
    • Tavily: Web search API for scene discovery and related artists content.
    • Spotipy: Spotify Web API client (used here for artist genres).
  • Frontend

    • D3.js: Force-directed graph visualization with zoom/pan, drag, pin/unpin, tooltips.

Project Structure

  • src/main.py: FastAPI application (API + static hosting).
  • src/agent.py: LangGraph workflow nodes and wiring (bootstrap search → iterate → report).
  • src/tools.py: Tavily and Spotify tools for search and metadata.
  • src/llm.py: LLM configuration (Ollama chat model; JSON mode for structured extraction).
  • src/state.py: Shared state type used by the workflow.
  • static/index.html: Web UI (graph viewer + controls + final report).
  • requirements.txt: Python dependencies.

Pipeline (High Level)

  1. Bootstrap scene discovery

    • Tavily search for "main artists for music scene".
    • LLM extracts artist names from the returned text (JSON-structured extraction).
    • Initialize the graph with these artists as nodes.
  2. Iterative expansion

    • Pop one artist from the queue as current_artist.
    • Fetch context:
      • Spotify genres (via Spotipy; client-credentials flow).
      • Tavily web results for "artists related to or influenced by <current_artist>".
    • LLM extracts:
      • new_artists: new nodes to add to queue/graph.
      • new_links: undirected links {source, target, type, confidence} (defaults: type="related", confidence≈0.6 if not provided).
    • Continue until the queue empties or max_loops is reached.
  3. Final report

    • LLM generates a short narrative report based on the full graph and the seed query.
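The three stages above can be sketched in plain Python. This is a simplified stand-in for the LangGraph workflow in src/agent.py: the `search`, `extract_artists`, and `extract_links` callables are hypothetical placeholders for the Tavily, Spotify, and LLM tools, and the real app wires these steps as graph nodes rather than a single loop.

```python
from collections import deque

def explore_scene(seed_query, max_loops, search, extract_artists, extract_links):
    """Simplified sketch of the bootstrap -> expand -> graph pipeline.

    `search`, `extract_artists`, and `extract_links` are hypothetical
    stand-ins for the Tavily search and LLM extraction tools.
    """
    # 1. Bootstrap: search the scene and seed the graph with extracted artists.
    seed_text = search(f"main artists for music scene {seed_query}")
    nodes = set(extract_artists(seed_text))
    queue = deque(nodes)
    links = []

    # 2. Iterative expansion: pop an artist, fetch context, extract new
    #    artists and links, until the queue empties or max_loops is hit.
    loops = 0
    while queue and loops < max_loops:
        current_artist = queue.popleft()
        context = search(f"artists related to or influenced by {current_artist}")
        new_artists, new_links = extract_links(current_artist, context)
        for artist in new_artists:
            if artist not in nodes:
                nodes.add(artist)
                queue.append(artist)
        links.extend(new_links)
        loops += 1

    # 3. The final report step (LLM narrative over the graph) is omitted here.
    return {"nodes": [{"id": n} for n in sorted(nodes)], "links": links}
```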

Link Semantics

  • Links are currently modeled as undirected relations in the UI (no arrowheads).
  • Each link carries:
    • type: relation type (default "related").
    • confidence: a float in [0,1]; the UI maps this to stroke width.
  • Tooltips show type and confidence on hover.
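The defaults described above could be applied with a small normalizer like the following. The field names match the link shape documented here, but the helper itself is a hypothetical sketch, not the app's actual code.

```python
def normalize_link(raw: dict) -> dict:
    """Fill in the documented defaults for an LLM-extracted link."""
    link = {
        "source": raw["source"],
        "target": raw["target"],
        "type": raw.get("type") or "related",      # default relation type
        "confidence": raw.get("confidence", 0.6),  # default when the LLM omits it
    }
    # Clamp confidence into [0, 1] so the UI's stroke-width mapping stays sane.
    link["confidence"] = max(0.0, min(1.0, float(link["confidence"])))
    return link
```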

API

POST /explore

Request body:

{
  "seed_query": "UK '70s post punk",
  "max_loops": 10
}

Response body (shape):

{
  "graph_data": {
    "nodes": [{ "id": "Joy Division" }, { "id": "Bauhaus" }],
    "links": [
      { "source": "Joy Division", "target": "Bauhaus", "type": "related", "confidence": 0.7, "weight": 0.7 }
    ]
  },
  "report": "Joy Division are considered among the most iconic bands..."
}
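A minimal client for this endpoint, using only the Python standard library and assuming the default uvicorn address (http://localhost:8000):

```python
import json
from urllib import request

API_URL = "http://localhost:8000/explore"  # assumes the default uvicorn host/port

def build_explore_request(seed_query: str, max_loops: int = 10) -> request.Request:
    """Build the POST /explore request with a JSON body."""
    body = json.dumps({"seed_query": seed_query, "max_loops": max_loops}).encode()
    return request.Request(API_URL, data=body,
                           headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    with request.urlopen(build_explore_request("UK '70s post punk")) as resp:
        result = json.load(resp)
    print(len(result["graph_data"]["nodes"]), "artists in graph")
    print(result["report"])
```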

Frontend

  • Served at / (and assets under /static).
  • Features:
    • Form to submit a seed query and loop count.
    • Force graph with zoom/pan, drag, pin/unpin, neighbor highlighting, window resize handling.
    • Link tooltips showing relation type and confidence; node tooltip shows name (and genres when available).
    • Final report shown in the right sidebar.

Setup

Prerequisites

  • Python 3.10+ (tested on 3.13) and a working Ollama installation if using ChatOllama.

Installation

python -m venv venv

# Windows PowerShell
venv\Scripts\Activate.ps1


# macOS/Linux
source venv/bin/activate

pip install -r requirements.txt

Environment Variables

Create a .env file in the project root with:

# LLM
LLM_MODE=OLLAMA
OLLAMA_MODEL=llama3.1:latest   # example model; ensure it's available in your Ollama

# Tavily
TAVILY_API_KEY=your_tavily_api_key

# Spotify (Client Credentials flow; requires an app registered in the Spotify Developer Dashboard)
SPOTIPY_CLIENT_ID=your_spotify_client_id
SPOTIPY_CLIENT_SECRET=your_spotify_client_secret

Notes:

  • src/llm.py expects LLM_MODE=OLLAMA and OLLAMA_MODEL to be set.
  • If Ollama is not running or the model is missing, LLM calls will fail.
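A fail-fast startup check for these variables might look like the sketch below. The variable names match the ones listed above, but the helper is hypothetical; the actual validation in src/llm.py may differ.

```python
import os

def load_llm_config(env=os.environ) -> dict:
    """Validate the LLM settings src/llm.py expects, failing fast when missing.

    A sketch of startup validation; `env` accepts any mapping so it can
    be tested without touching the real environment.
    """
    mode = env.get("LLM_MODE")
    if mode != "OLLAMA":
        raise RuntimeError(f"Unsupported LLM_MODE: {mode!r} (expected 'OLLAMA')")
    model = env.get("OLLAMA_MODEL")
    if not model:
        raise RuntimeError("OLLAMA_MODEL is not set; pull a model into Ollama first")
    return {"mode": mode, "model": model}
```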

Run

Recommended (module style):

uvicorn src.main:app --reload

Alternatively (script style):

python src/main.py

Open the UI:

http://localhost:8000/

What’s Implemented

  • End-to-end pipeline from seed query to graph + report.
  • Undirected links with per-link type and confidence (defaults applied when the LLM cannot determine them).
  • Interactive graph viewer with meaningful interactions and link tooltips.
  • Spotify genres for context (used in prompts; future UI enhancements planned).

Future Improvements

  • Enrich link semantics (clearer types plus direction only when meaningful, e.g., influence).
  • Side panel with artist details: genres, Spotify profile, top tracks, audio previews.
  • Caching and rate limiting for external APIs.
  • Alternative LLM backends (e.g., OpenAI, Anthropic) via LangChain configuration.
  • Persist graphs and reports for later retrieval and comparison.

About

Personal project to learn LangGraph and LangChain. Music stuff.
