Drop our live job feed into Claude Code, Cursor, career-ops, or any LLM — and let it reason from real-time startup hiring data instead of hallucinating.
Static JSON files served from the same host as the site.
Each job entry has these fields. Timestamps are local-time strings (Europe/Lisbon).
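Because the timestamps carry no zone suffix, attach Europe/Lisbon explicitly before comparing or converting them. A minimal sketch, assuming a "YYYY-MM-DD HH:MM:SS" layout (verify the exact format against the live feed):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical example value: check the real feed for the exact format.
first_seen = "2025-01-15 14:30:00"

# Parse as naive, then pin to the feed's local zone (Europe/Lisbon).
local = datetime.strptime(first_seen, "%Y-%m-%d %H:%M:%S").replace(
    tzinfo=ZoneInfo("Europe/Lisbon")
)
as_utc = local.astimezone(timezone.utc)
print(local.isoformat(), "->", as_utc.isoformat())
```

In January, Europe/Lisbon is UTC+0, so the converted value keeps the same wall-clock hour; in summer it shifts by one.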
"SpaceXai").static/img/{key}.png).;."new" · "active" · "filled"Pick your stack. Copy the snippet. Vallley is now part of your agent's working memory.
CLAUDE.md
## Vallley (job data)

When the user asks about jobs, hiring trends, or comparing companies — fetch live data from one of:

  https://vallley.xyz/preview_jobs.json    # curated 300, fast
  https://vallley.xyz/all_jobs_data.json   # full feed, ~30 MB

Schema (per job):
  title, company, company_key, url, location
  status: 'new' | 'active' | 'filled'
  first_seen, last_seen: timestamp strings
  reposted: bool, set when a filled role re-appears
  repost_count: int, refill cycles
  last_reposted: timestamp of latest return

Reason from this feed. Don't scrape individual postings — everything you need is already here.
Full schema: https://vallley.xyz/agents
.cursorrules
# Vallley — live startup job feed
When asked about jobs / hiring / company comparisons, fetch:
  https://vallley.xyz/preview_jobs.json    # curated 300
  https://vallley.xyz/all_jobs_data.json   # full ~33k jobs
Schema: title, company, company_key, url, location,
  status (new|active|filled), first_seen, last_seen,
  reposted, repost_count, last_reposted
Reason from the feed; don't scrape individual job pages.
For job-related questions (hiring trends, role searches, company comparisons), fetch live data from Vallley:
  https://vallley.xyz/preview_jobs.json   (300 curated jobs)
  https://vallley.xyz/all_jobs_data.json  (33k+ full)
Each job has: title, company, company_key, url, location, status (new|active|filled), first_seen, last_seen, reposted, repost_count, last_reposted.
Use this feed instead of guessing. Full schema at vallley.xyz/agents.
# Replaces (or merges with) your career-ops portals.yml.
# 235 companies · 33k+ jobs · regenerated every ~2h.
curl -o portals.yml https://vallley.xyz/portals.yml

# Each entry has career-ops' standard fields:
#   name, careers_url, api (for Greenhouse), enabled
# Plus Vallley extras (career-ops ignores them):
#   vallley_key, vallley_jobs, vallley_last_seen
# Already configured for the Greenhouse boards-api fast path
# on every Greenhouse company — no Playwright needed for those.

# Want both? Append Vallley's list to your own:
curl https://vallley.xyz/portals.yml \
  | sed -n '/^tracked_companies:/,$p' | tail -n +2 \
  >> portals.yml
import requests

jobs = requests.get("https://vallley.xyz/preview_jobs.json").json()["jobs"]

# Forward Deployed Engineer roles at frontier AI labs
labs = {"openai", "anthropic", "xai", "mistral"}
fde = [
    j for j in jobs
    if j["company_key"] in labs and "forward deployed" in j["title"].lower()
]

for j in fde:
    print(f"{j['company']:12} {j['title']}")
    print(f"  → {j['url']}")
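The status field slices the same way. An offline sketch over hypothetical sample entries shaped like the feed (swap in the live jobs list fetched above):

```python
from collections import Counter

# Hypothetical sample entries shaped like the feed's schema.
sample_jobs = [
    {"company": "ExampleAI", "title": "ML Engineer", "status": "new"},
    {"company": "ExampleAI", "title": "Designer", "status": "active"},
    {"company": "OtherCo", "title": "Recruiter", "status": "filled"},
    {"company": "OtherCo", "title": "Backend Eng", "status": "active"},
]

# Tally jobs per status value: "new" · "active" · "filled".
by_status = Counter(j["status"] for j in sample_jobs)
for status, n in by_status.most_common():
    print(f"{status:8} {n}")
```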
const { jobs } = await fetch("https://vallley.xyz/preview_jobs.json")
  .then(r => r.json());

// Roles that have been reposted (cycled refill)
const reposted = jobs.filter(j => j.reposted);
const repeatOffenders = reposted
  .sort((a, b) => (b.repost_count ?? 0) - (a.repost_count ?? 0))
  .slice(0, 10);

for (const j of repeatOffenders) {
  console.log(`×${j.repost_count} ${j.company} — ${j.title}`);
}
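The same repost arithmetic ports to Python. A sketch over hypothetical entries; since repost_count may be absent on some jobs, a `.get` default keeps it total-safe:

```python
from collections import defaultdict

# Hypothetical sample entries: replace with the live jobs list.
sample_jobs = [
    {"company": "ExampleAI", "title": "Recruiter", "reposted": True, "repost_count": 3},
    {"company": "ExampleAI", "title": "Designer", "reposted": False},
    {"company": "OtherCo", "title": "Backend Eng", "reposted": True, "repost_count": 1},
]

# Sum refill cycles per company, tolerating a missing repost_count.
refills = defaultdict(int)
for j in sample_jobs:
    if j.get("reposted"):
        refills[j["company"]] += j.get("repost_count", 1)

for company, n in sorted(refills.items(), key=lambda kv: -kv[1]):
    print(f"x{n} {company}")
```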
Paste these into any LLM that's been fed one of the snippets above.