● “Talk with Naren” - Detailed Implementation Plan
Based on my analysis of your Jekyll portfolio, here’s a comprehensive implementation plan:
- Core Architecture
Jekyll Site → “Talk with Me” Button → WebLLM Interface → Context-Aware Chat
Technical Stack:
- WebLLM: Llama-2-7B-Chat (quantized for browser)
- Context Engine: Pre-processed Jekyll data as JSON
- UI Framework: Vanilla JS + CSS (matches your existing theme)
- Storage: localStorage for conversation persistence
- User Classification System
Initial Greeting Logic:
```javascript
const userClassification = {
  recruiter: {
    keywords: ["hire", "position", "job", "opportunity", "candidate"],
    greeting: "Looking to hire a robotics engineer? Let me tell you about my experience!",
    persona: "professional_showcase"
  },
  colleague: {
    keywords: ["robotics", "ROS", "research", "technical", "developer"],
    greeting: "Fellow robotics enthusiast? Great to meet you!",
    persona: "technical_peer"
  },
  explorer: {
    keywords: ["about", "projects", "portfolio", "curious"],
    greeting: "Welcome! I'm Naren - what would you like to know?",
    persona: "friendly_guide"
  }
};
```
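The object above only declares the categories; a minimal classifier sketch (the `classifyUser` helper is illustrative, not an existing API) could score the visitor's first message against each keyword list and fall back to the explorer persona:

```javascript
// Score the visitor's first message against each persona's keyword list
// and return the best-matching config; default to "explorer" on no match.
function classifyUser(message) {
  const text = message.toLowerCase();
  let best = { type: "explorer", score: 0 };
  for (const [type, config] of Object.entries(userClassification)) {
    const score = config.keywords.filter((kw) => text.includes(kw.toLowerCase())).length;
    if (score > best.score) best = { type, score };
  }
  return userClassification[best.type];
}

// classifyUser("We have an open position for a senior robotics hire")
// → recruiter config (recruiter greeting + "professional_showcase" persona)
```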
Dynamic Context Loading:
```javascript
// Context from your Jekyll data
const narenContext = {
  experience: {
    current: "Recent M.S. Robotics graduate from ASU",
    highlight: "Robotics Software Engineer at Padma Agrobotics",
    expertise: ["ROS2", "Multi-agent systems", "Autonomous navigation"]
  },
  technical_skills: {
    programming: ["Python (90%)", "C/C++ (80%)", "MATLAB (70%)"],
    robotics: ["ROS/ROS2 (95%)", "Gazebo", "Carla Simulator"],
    ai_ml: ["TensorFlow", "PyTorch", "Computer Vision"]
  },
  projects: {
    standout: ["GPT-Carla Autonomous Driving", "Object Goal Navigation", "Speech-LLM Pipeline"],
    research: ["IEEE Multi-agent Systems Paper", "Autonomous Navigation Enhancement"]
  }
};
```
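One straightforward way to use this context is to serialize it into the system prompt when a chat starts; `buildSystemPrompt` below is a hedged sketch, not a fixed interface:

```javascript
// Turn the pre-processed Jekyll data plus the chosen persona into a
// system prompt that keeps the model grounded in real portfolio facts.
function buildSystemPrompt(context, persona) {
  return [
    "You are Naren, chatting with a visitor on your portfolio site.",
    `Adopt the "${persona}" tone.`,
    "Answer only from the FACTS below; if something isn't covered, say so",
    "and point the visitor to the contact links instead of guessing.",
    "FACTS:",
    JSON.stringify(context, null, 2)
  ].join("\n");
}

const systemPrompt = buildSystemPrompt(narenContext, "technical_peer");
```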
- Conversation Flows by User Type
For Recruiters:
```
Initial → "What role are you hiring for?"
   ↓
Role Analysis → Match skills to requirements
   ↓
Project Showcase → Relevant experience + GitHub links
   ↓
Next Steps → "Want to see my CV or schedule a call?"
```
For Technical Colleagues:
```
Initial → "What aspect of robotics interests you?"
   ↓
Technical Deep-dive → ROS challenges, research insights
   ↓
Collaborative Discussion → "Working on anything similar?"
   ↓
Knowledge Exchange → Papers, resources, war stories
```
For General Explorers:
```
Initial → "What brings you to my portfolio?"
   ↓
Guided Tour → Projects, background, interests
   ↓
Interactive Q&A → Casual conversation about career/projects
   ↓
Connection → "Feel free to reach out on LinkedIn!"
```
- Implementation Strategy
Phase 1: Basic Integration
- Add “Talk with Me” button to landing page
- Create modal interface for chat
- Integrate WebLLM with basic context (see the sketch below)
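A sketch of that integration using WebLLM's OpenAI-style API; the exact model id should be taken from WebLLM's prebuilt model list, and `updateModalStatus` is a hypothetical UI helper:

```javascript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Example id only - check WebLLM's prebuilt model list for the exact string.
const MODEL_ID = "Llama-2-7b-chat-hf-q4f16_1-MLC";

export async function startChat(systemPrompt, userMessage) {
  // The first call downloads and compiles the model; the progress callback
  // lets the modal show "Preparing to chat…" with real numbers.
  const engine = await CreateMLCEngine(MODEL_ID, {
    initProgressCallback: (report) => updateModalStatus(report.text),
  });

  const reply = await engine.chat.completions.create({
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userMessage },
    ],
  });
  return reply.choices[0].message.content;
}
```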
Phase 2: Context Enhancement
- Pre-process all Jekyll data into JSON
- Implement semantic search for relevant content (a lightweight stand-in is sketched after this list)
- Add project-specific deep-dive capabilities
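True semantic search needs an embedding model; as a lightweight stand-in, a keyword-overlap scorer over the pre-processed JSON can decide which sections to inject for a deep-dive. The fetch path matches the file structure below, and the flat `{ section: text }` shape is an assumption about how the data gets exported:

```javascript
// Keyword-overlap ranking as a simple stand-in for semantic search:
// return the context sections sharing the most words with the question.
async function relevantSections(question, topN = 2) {
  const res = await fetch("/assets/js/context-data.json");
  const sections = await res.json(); // assumed shape: { sectionName: text }
  const words = question.toLowerCase().split(/\W+/).filter((w) => w.length > 2);
  return Object.entries(sections)
    .map(([name, text]) => ({
      name,
      score: words.filter((w) => String(text).toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topN)
    .map((s) => s.name);
}
```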
Phase 3: Advanced Features
- Voice input/output integration
- Conversation history persistence (see the localStorage sketch below)
- Dynamic content linking (navigate to projects)
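Conversation persistence can reuse the localStorage layer already listed in the stack; a minimal sketch (the storage key name is an arbitrary choice):

```javascript
const HISTORY_KEY = "talk-with-me-history"; // arbitrary key name

// Persist the OpenAI-style messages array between visits.
function saveHistory(messages) {
  localStorage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory() {
  try {
    return JSON.parse(localStorage.getItem(HISTORY_KEY)) ?? [];
  } catch {
    return []; // missing or corrupted entry: start a fresh conversation
  }
}
```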
- File Structure
```
assets/js/
├── webllm/
│   ├── webllm.js          (WebLLM library)
│   ├── models/            (quantized model files)
│   └── chat-interface.js
├── talk-with-me.js        (main integration)
└── context-data.json      (processed Jekyll data)

_includes/
└── talk-with-me-modal.html

_sass/
└── _talk-with-me.scss
```
- Sample Conversations
Recruiter Scenario:
```
User:  "Hi, I'm looking for a robotics engineer with ROS experience"

Naren: "Perfect! I have extensive ROS2 experience from my role at Padma
        Agrobotics where I worked on autonomous agricultural systems. I've
        published research on multi-agent systems and have 95% proficiency
        in ROS/ROS2. What specific challenges is your team facing?"
```
Technical Colleague:
```
User:  "How do you handle sensor fusion in ROS2?"

Naren: "Great question! In my Padma Agrobotics project, I integrated GPS and
        camera data using sensor_msgs. The key was handling different update
        rates - GPS at 10Hz, camera at 30Hz. I used message filters for
        temporal synchronization. Have you run into similar timing issues?"
```
- Performance Considerations
- Lazy Loading: WebLLM loads only when chat is opened
- Model Size: ~4GB quantized model (a heavy one-time download; cached after first use)
- Caching: Context data cached in localStorage
- Fallback: Text-only mode for slower connections or unsupported browsers (see the detection sketch below)
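WebLLM runs on WebGPU, so the fallback also needs to cover browsers without it; a small detection sketch (`showTextOnlyFallback` is a hypothetical helper):

```javascript
// WebLLM requires WebGPU; detect support before offering the full chat.
async function chatSupported() {
  if (!("gpu" in navigator)) return false;
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;
}

chatSupported().then((ok) => {
  if (!ok) showTextOnlyFallback(); // hypothetical: static FAQ + contact links
});
```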
● “Talk with Me” Feature Explained
What it means: An AI-powered chat where visitors can have a conversation “with you” - the LLM responds as if it’s Naren, using your portfolio data. Like having a 24/7 personal assistant that knows your projects, skills, and experience.
Performance Analysis & Architecture Recommendation
Your instinct is absolutely correct - loading a 4GB model on every page would devastate performance.
Optimal Architecture:
Floating Chat Button (all pages) → Modal Loads → Model Downloads (lazy)
Implementation Strategy:
- Site-wide button: Small CSS/JS snippet added to all pages (~5KB total)
- Lazy loading: Model downloads ONLY when user clicks “Talk with Me”
- Progressive loading: Show “Preparing to chat…” while model loads
- Caching: Once downloaded, model stays cached in browser
Performance Impact:
- Page load: +5KB (negligible)
- First chat: one-time model download (a ~4GB download can take several minutes on typical connections)
- Subsequent chats: near-instant startup (model served from browser cache)
Example Implementation:
- Add floating button to _layouts/default.html
- Button triggers modal that loads WebLLM on-demand (see the lazy-loading sketch below)
- No performance impact until user actually wants to chat
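A minimal sketch of that wiring; the element id, `openChatModal`, and `renderMessage` are illustrative, while `startChat` is the export from the Phase 1 sketch above:

```javascript
// Included site-wide via _layouts/default.html (~5KB); the heavy WebLLM
// bundle and model files are only fetched after the first click.
document.getElementById("talk-with-me-btn").addEventListener("click", async () => {
  openChatModal(); // hypothetical helper: shows "Preparing to chat…" at once
  const { startChat } = await import("/assets/js/talk-with-me.js"); // lazy import
  const answer = await startChat(
    buildSystemPrompt(narenContext, "friendly_guide"),
    "Hi! What brings you to my portfolio?"
  );
  renderMessage(answer); // hypothetical helper: appends the reply to the modal
});
```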