A premium React + TypeScript portfolio built as an applied-intelligence product showcase rather than a conventional portfolio page.
This project uses React + TypeScript + Vite + Tailwind + Framer Motion.
Vite was chosen intentionally for production polish and for compatibility with the current RunAnywhere Web SDK guidance:
- The SDK documentation currently recommends Vite for web projects.
- Vite makes the RunAnywhere WASM asset workflow explicit and controllable.
- Cross-origin isolation headers, worker format, and `optimizeDeps.exclude` are easier to keep aligned with the SDK's current browser-runtime requirements.
- The app remains lightweight while still giving us a modular, production-grade React architecture.
- Software Engineer
- AI Engineer
- Agentic Systems Builder
- Applied Intelligence Product Engineer
The experience is meant to signal engineering depth, interface discipline, runtime awareness, and product maturity before a visitor reads much text.
`src/app/App.tsx`

- Sticky command-style navigation
- Editorial single-page flow with premium dark visual system
- Section-based composition instead of template sections
`src/content/portfolio.ts`

- Central source for brand copy, featured systems, architecture layers, build modes, philosophy, GitHub identity context, and portfolio agent knowledge
- Makes the site easy to extend without hardcoding narrative copy directly into components
- `src/styles/globals.css`
- `src/components/ui/button.tsx`
- `src/components/ui/section-heading.tsx`
- `src/components/ui/surface.tsx`
The visual language is restrained dark technical luxury:
- layered graphite backgrounds
- off-white and muted silver text
- one controlled cool accent
- large display typography with editorial spacing
- utility surfaces for premium panels instead of generic cards
- Hero + intelligence map
- Capability strip
- Selected systems
- Agentic architecture blueprint
- Build-anything-with-AI selector
- RunAnywhere runtime control panel
- Advanced portfolio agent
- GitHub intelligence layer
- Build philosophy
- Premium CTA
RunAnywhere is not presented as a fake badge or a future feature. The site includes a real browser-local model lifecycle layer.
- `src/features/runanywhere/model-catalog.ts`
- `src/features/runanywhere/sdk.ts`
- `src/features/runanywhere/runtime-manager.ts`
- `src/features/runanywhere/runtime-provider.tsx`
- `src/components/sections/local-runtime-section.tsx`
The local model flow is architected like this:
- The runtime initializes lazily after first idle time.
- RunAnywhere core and the Llama.cpp backend are loaded dynamically.
- The local language model is registered if it is not already in the SDK catalog.
- When the user clicks activation:
  - if the model is missing, `ModelManager.downloadModel()` starts automatically
  - download progress is surfaced in the UI
  - the model is stored in OPFS by the SDK
  - the model is then loaded into memory with `ModelManager.loadModel()`
- On later visits, the SDK detects the existing OPFS cache and the UI moves straight to ready/load behavior without re-downloading.
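The activation flow above can be sketched as a small orchestration function. The real logic lives in `src/features/runanywhere/runtime-manager.ts`; the `ModelManagerLike` interface below is a simplified stand-in that only mirrors the two SDK calls the text names (`downloadModel` / `loadModel`), not the actual RunAnywhere API surface.

```typescript
// Simplified stand-in for the parts of the SDK's ModelManager this flow uses.
type ModelStatus = "not-downloaded" | "ready" | "active";

interface ModelManagerLike {
  getStatus(modelId: string): Promise<ModelStatus>;
  downloadModel(modelId: string, onProgress: (pct: number) => void): Promise<void>;
  loadModel(modelId: string): Promise<void>;
}

async function activateModel(
  manager: ModelManagerLike,
  modelId: string,
  onProgress: (pct: number) => void,
): Promise<ModelStatus> {
  // If the model is missing, download it first; the SDK persists it in OPFS,
  // so later visits skip straight to the load step without re-downloading.
  if ((await manager.getStatus(modelId)) === "not-downloaded") {
    await manager.downloadModel(modelId, onProgress);
  }
  await manager.loadModel(modelId); // load the cached model into memory
  return "active";
}
```

Keeping the orchestration pure like this makes the download-then-load ordering easy to unit test against a fake manager, without touching the browser runtime.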
The UI exposes these states clearly:
`not-downloaded`, `downloading`, `ready`, `loading`, `active`, `failed`
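The six states can be modeled as a string union with an explicit transition map. The state names match the badges above; the allowed transitions are an assumption inferred from the described flow, not copied from the repository.

```typescript
// State names mirror the UI badges; the transition map is an illustrative
// assumption about the described download → load → active flow.
type RuntimeState =
  | "not-downloaded"
  | "downloading"
  | "ready"
  | "loading"
  | "active"
  | "failed";

const transitions: Record<RuntimeState, RuntimeState[]> = {
  "not-downloaded": ["downloading"],
  downloading: ["ready", "failed"],
  ready: ["loading"],
  loading: ["active", "failed"],
  active: [],
  failed: ["downloading", "loading"], // retry paths
};

function canTransition(from: RuntimeState, to: RuntimeState): boolean {
  return transitions[from].includes(to);
}
```

Encoding the transitions this way lets the UI reject impossible state jumps (for example, `active` back to `downloading`) at one central point.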
The SDK persists downloaded models in the browser’s Origin Private File System (OPFS). This means:
- models survive page refreshes
- models survive browser restarts
- the portfolio can reuse the cached model in later sessions
- the user sees a real browser-local activation workflow instead of a mock
- `vite.config.ts` copies required WASM binaries into `dist/assets` for production
- `optimizeDeps.exclude` is set for the RunAnywhere WASM packages
- cross-origin headers are configured both in dev and in `vercel.json`
This follows the current RunAnywhere documentation for Vite bundling, WASM discovery, and credentialless cross-origin isolation.
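The three concerns above can be sketched as a minimal `vite.config.ts`. The package names in `optimizeDeps.exclude` are placeholders, not the SDK's real package names; use the names from the RunAnywhere Vite guide.

```typescript
// Minimal sketch, assuming placeholder package names in optimizeDeps.exclude.
import { defineConfig } from "vite";

export default defineConfig({
  optimizeDeps: {
    // Keep the WASM-backed packages out of Vite's pre-bundling step.
    exclude: ["@runanywhere/web", "@runanywhere/llamacpp"], // placeholder names
  },
  server: {
    headers: {
      // Cross-origin isolation in dev; vercel.json mirrors these in production.
      "Cross-Origin-Opener-Policy": "same-origin",
      "Cross-Origin-Embedder-Policy": "credentialless",
    },
  },
  worker: { format: "es" },
});
```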
The portfolio agent now also supports a faster Groq-backed testing mode so you do not need to wait for the local model download during development.
- `api/groq.js`
- `src/features/agent/groq-provider.ts`
- `src/components/sections/portfolio-agent-section.tsx`
- `vite.config.ts`
Use a local `.env` file with:

```
GROQ_API_KEY=your_key_here
GROQ_MODEL=llama-3.1-8b-instant
VITE_GROQ_PROXY_URL=
```

`GROQ_API_KEY` intentionally stays server-side:
- in local development, Vite serves `/api/groq` through a dev middleware proxy
- in Vercel deployments, `api/groq.js` handles the same route
This means the key does not need to be exposed to the browser as a `VITE_*` variable.
For static hosting, `VITE_GROQ_PROXY_URL` is optional and should point to an external proxy endpoint if you want Groq available in production. If it is omitted, the deployed site uses the local RunAnywhere runtime or the curated fallback agent path instead of trying to call `/api/groq`.
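The endpoint-selection rules above can be captured in one pure function. `resolveGroqEndpoint` is a hypothetical helper for illustration, not the actual code in `src/features/agent/groq-provider.ts`.

```typescript
// Hypothetical helper: pick the Groq endpoint per the rules described above.
interface GroqEnv {
  /** Set only when an external proxy exists (static hosting). */
  VITE_GROQ_PROXY_URL?: string;
}

function resolveGroqEndpoint(env: GroqEnv, hasApiRoute: boolean): string | null {
  // An explicit proxy URL wins: static hosts have no /api/groq route of their own.
  if (env.VITE_GROQ_PROXY_URL) return env.VITE_GROQ_PROXY_URL;
  // Vite dev middleware and Vercel both serve the same relative route.
  if (hasApiRoute) return "/api/groq";
  // No endpoint available: the caller falls back to the local runtime
  // or the curated fallback agent path.
  return null;
}
```

Returning `null` rather than a bad URL keeps the fallback decision in one place for the caller.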
The portfolio agent supports four inference paths:
- Groq Test
- Auto Route
- Local Runtime
- Fallback Only
For your current testing flow, Groq Test is the fastest path.
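Routing across the four paths can be sketched as follows. The mode names mirror the UI labels; the priority order inside `auto` (Groq if configured, then local runtime, then fallback) is an assumption about the intended behavior, not confirmed from the repository.

```typescript
// Illustrative router across the four inference paths; "auto" ordering is an assumption.
type AgentMode = "groq-test" | "auto" | "local" | "fallback-only";
type Backend = "groq" | "local" | "fallback";

interface RouteContext {
  groqAvailable: boolean;   // proxy endpoint resolved and reachable
  localModelActive: boolean; // RunAnywhere model loaded in the browser
}

function routeInference(mode: AgentMode, ctx: RouteContext): Backend {
  switch (mode) {
    case "groq-test":
      return ctx.groqAvailable ? "groq" : "fallback";
    case "local":
      return ctx.localModelActive ? "local" : "fallback";
    case "fallback-only":
      return "fallback";
    case "auto":
      if (ctx.groqAvailable) return "groq";
      if (ctx.localModelActive) return "local";
      return "fallback";
  }
}
```

Every branch degrades to `"fallback"`, which matches the document's point that the UX stays useful without a live model.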
This repository now includes a Render Blueprint (`render.yaml`). It configures:
- a static frontend build with `npm install && npm run build`, with `dist` as the static publish directory
- a rewrite from `/*` to `/index.html` for SPA routing
- a separate lightweight Groq proxy service on Render
- `Cross-Origin-Opener-Policy: same-origin`
- `Cross-Origin-Embedder-Policy: credentialless`
Those headers matter because the RunAnywhere browser runtime depends on cross-origin isolation for the best local-model experience.
For Render static deployment:
- Connect `https://github.com/FiscalMindset/algsochvicky.git` in Render.
- Create a Blueprint or Static Site from the repo.
- Let Render use the root `render.yaml`.
- Set `GROQ_API_KEY` on the `algsochvicky-groq` service when Render asks for unsynced env vars.
- Deploy the default branch.
Important:
- `api/groq.js` is still used for Vercel deployments
- local development still uses the Vite `/api/groq` middleware path
- Render live deployment now keeps Groq available through the separate `algsochvicky-groq` proxy service
- the static frontend reads `VITE_GROQ_PROXY_URL`, so live Groq stays fast without exposing the secret in the browser
The portfolio agent is intentionally more structured than a normal landing-page chatbot.
- `src/features/agent/engine.ts`
- `src/features/agent/types.ts`
- `src/components/sections/portfolio-agent-section.tsx`
The agent currently supports:
- Recruiter Mode
- Client Mode
- Technical Deep Dive
- Project Explorer
- AI Capability Mode
For each question, the agent pipeline will:
- Detect likely audience intent from the question
- Retrieve the highest-signal evidence from the curated knowledge base
- Select the most relevant systems
- Build reasoning notes and follow-up suggestions
- If the local model is active, generate the final phrasing on-device using a grounded prompt
- If local inference is inactive, fall back to deterministic portfolio synthesis instead of a fake LLM
This means the UX remains useful even when a local model has not been activated.
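The pipeline above can be condensed into a short sketch. The types, keyword heuristics, and scoring below are illustrative stand-ins, not the real `engine.ts`.

```typescript
// Illustrative sketch of the detect → retrieve → synthesize pipeline.
interface KnowledgeEntry {
  topic: string;
  audience: "recruiter" | "client" | "technical";
  text: string;
  signal: number; // curated signal weight
}

// Hypothetical keyword heuristics for intent detection.
function detectAudience(question: string): KnowledgeEntry["audience"] {
  const q = question.toLowerCase();
  if (/(hire|resume|experience|role)/.test(q)) return "recruiter";
  if (/(architecture|stack|wasm|runtime)/.test(q)) return "technical";
  return "client";
}

function answer(
  question: string,
  kb: KnowledgeEntry[],
  localReply?: (groundedPrompt: string) => string,
): string {
  const audience = detectAudience(question);
  // Retrieve the highest-signal evidence for the detected audience.
  const evidence = kb
    .filter((e) => e.audience === audience)
    .sort((a, b) => b.signal - a.signal)
    .slice(0, 3);
  const grounded = evidence.map((e) => e.text).join(" ");
  // If the local model is active, it phrases the grounded context; otherwise
  // return deterministic synthesis instead of a fake LLM response.
  return localReply ? localReply(grounded) : grounded;
}
```

The key property is that the grounded evidence string is built the same way on both paths, so switching the local model on only changes phrasing, never facts.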
This portfolio intentionally avoids dumping every repository.
- `fiscalmindset` is treated as the canonical active GitHub identity
- `algsoch` is treated as legacy historical context because the account is suspended
Repository signal is ranked by:
- manual featured priority
- execution depth
- AI depth
- product signal
- completeness
- recency
The current site uses curated fallback data, which is the right baseline when live GitHub API integration is unavailable or legacy-account behavior is unreliable.
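The ranking criteria above could be combined into a weighted score like the sketch below. The weights and the recency-decay window are illustrative assumptions, not the curated values the site ships with.

```typescript
// Illustrative weighted ranking over the six criteria; all weights are assumptions.
interface RepoSignal {
  name: string;
  featuredPriority: number; // manual override, highest weight
  executionDepth: number;
  aiDepth: number;
  productSignal: number;
  completeness: number;
  recencyDays: number; // days since last meaningful commit
}

function scoreRepo(r: RepoSignal): number {
  const recency = Math.max(0, 1 - r.recencyDays / 365); // decay over one year
  return (
    r.featuredPriority * 5 +
    r.executionDepth * 2 +
    r.aiDepth * 2 +
    r.productSignal * 1.5 +
    r.completeness +
    recency
  );
}

function rankRepos(repos: RepoSignal[]): RepoSignal[] {
  return [...repos].sort((a, b) => scoreRepo(b) - scoreRepo(a));
}
```

Putting manual featured priority on the heaviest weight keeps curation in control even when automated signals disagree.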
The main customization point is `src/content/portfolio.ts`.
You can update:
- hero positioning
- featured systems
- build modes
- philosophy
- GitHub identity details
- suggested prompts
- contact CTA actions
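A content module covering those fields might look like the sketch below. The type and field names are hypothetical; the actual shapes in `src/content/portfolio.ts` may differ.

```typescript
// Hypothetical content shape; actual type and field names in the repo may differ.
interface PortfolioContent {
  hero: { headline: string; subline: string };
  featuredSystems: { name: string; summary: string; priority: number }[];
  buildModes: string[];
  philosophy: string[];
  github: { canonical: string; legacy: string };
  suggestedPrompts: string[];
  contactCta: { label: string; href: string };
}

const portfolio: PortfolioContent = {
  hero: { headline: "Applied Intelligence Product Engineer", subline: "Placeholder subline" },
  featuredSystems: [{ name: "Portfolio Agent", summary: "Placeholder summary", priority: 1 }],
  buildModes: ["Software Engineer", "AI Engineer"],
  philosophy: ["Ship real runtimes, not mock badges."],
  github: { canonical: "fiscalmindset", legacy: "algsoch" },
  suggestedPrompts: ["What can the local runtime do?"],
  contactCta: { label: "Get in touch", href: "#contact" },
};
```

Keeping all narrative copy behind one typed object is what lets the sections render without hardcoded strings.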
```sh
npm install
npm run dev
npm run build
```

- The project is dark-mode native by design.
- The local runtime depends on browser support for the required WASM and cross-origin isolation behavior.
- The default featured-system and GitHub content is curated fallback data meant to stay high-signal even before live APIs are connected.