● “Talk with Naren” - Detailed Implementation Plan

Based on my analysis of your Jekyll portfolio, here’s a comprehensive implementation plan:

  1. Core Architecture

```
Jekyll Site → "Talk with Me" Button → WebLLM Interface → Context-Aware Chat
```

Technical Stack: Jekyll (static site) + WebLLM for in-browser inference + vanilla JavaScript for the chat logic + SCSS for styling (see the file structure in section 5).

  2. User Classification System

Initial Greeting Logic:

```javascript
const userClassification = {
  recruiter: {
    keywords: ["hire", "position", "job", "opportunity", "candidate"],
    greeting: "Looking to hire a robotics engineer? Let me tell you about my experience!",
    persona: "professional_showcase"
  },
  colleague: {
    keywords: ["robotics", "ROS", "research", "technical", "developer"],
    greeting: "Fellow robotics enthusiast? Great to meet you!",
    persona: "technical_peer"
  },
  explorer: {
    keywords: ["about", "projects", "portfolio", "curious"],
    greeting: "Welcome! I'm Naren - what would you like to know?",
    persona: "friendly_guide"
  }
};
```
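A minimal sketch of how an opening message could be matched against these keyword lists (the `classifyUser` helper and its explorer fallback are assumptions, not part of the plan above):

```javascript
// Hypothetical helper: return the first category whose keywords appear
// in the visitor's opening message; default to "explorer".
function classifyUser(message) {
  const text = message.toLowerCase();
  for (const [type, config] of Object.entries(userClassification)) {
    if (config.keywords.some((kw) => text.includes(kw.toLowerCase()))) {
      return { type, ...config };
    }
  }
  return { type: "explorer", ...userClassification.explorer };
}

// Example: classifyUser("I'd like to hire a robotics engineer")
// → { type: "recruiter", greeting: "Looking to hire...", ... }
```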

Dynamic Context Loading:

```javascript
// Context from your Jekyll data
const narenContext = {
  experience: {
    current: "Recent M.S. Robotics graduate from ASU",
    highlight: "Robotics Software Engineer at Padma Agrobotics",
    expertise: ["ROS2", "Multi-agent systems", "Autonomous navigation"]
  },
  technical_skills: {
    programming: ["Python (90%)", "C/C++ (80%)", "MATLAB (70%)"],
    robotics: ["ROS/ROS2 (95%)", "Gazebo", "Carla Simulator"],
    ai_ml: ["TensorFlow", "PyTorch", "Computer Vision"]
  },
  projects: {
    standout: ["GPT-Carla Autonomous Driving", "Object Goal Navigation", "Speech-LLM Pipeline"],
    research: ["IEEE Multi-agent Systems Paper", "Autonomous Navigation Enhancement"]
  }
};
```
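For the model to answer as Naren, this context has to reach the LLM as a system prompt. One possible shape, where `buildSystemPrompt` and its exact wording are assumptions:

```javascript
// Hypothetical: flatten narenContext plus the visitor's persona into
// a single system message the LLM responds from.
function buildSystemPrompt(context, persona) {
  return [
    "You are Naren, a robotics engineer. Respond in first person as Naren.",
    `Persona for this visitor: ${persona}.`,
    `Background: ${context.experience.current}; ${context.experience.highlight}.`,
    `Expertise: ${context.experience.expertise.join(", ")}.`,
    `Key projects: ${context.projects.standout.join(", ")}.`,
    "Only discuss information from this profile; suggest LinkedIn for anything else."
  ].join("\n");
}
```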

  3. Conversation Flows by User Type

For Recruiters:

```
Initial → "What role are you hiring for?"
   ↓
Role Analysis → Match skills to requirements
   ↓
Project Showcase → Relevant experience + GitHub links
   ↓
Next Steps → "Want to see my CV or schedule a call?"
```

For Technical Colleagues:

```
Initial → "What aspect of robotics interests you?"
   ↓
Technical Deep-dive → ROS challenges, research insights
   ↓
Collaborative Discussion → "Working on anything similar?"
   ↓
Knowledge Exchange → Papers, resources, war stories
```

For General Explorers:

```
Initial → "What brings you to my portfolio?"
   ↓
Guided Tour → Projects, background, interests
   ↓
Interactive Q&A → Casual conversation about career/projects
   ↓
Connection → "Feel free to reach out on LinkedIn!"
```
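These three flows could be encoded as data so the chat logic stays generic. A sketch, with the stage wording taken from the diagrams above and the overall structure an assumption:

```javascript
// Hypothetical flow table: each persona advances through ordered stages,
// with a steering question injected at each step.
const conversationFlows = {
  professional_showcase: [
    "What role are you hiring for?",
    "Here are the projects that match those requirements.",
    "Want to see my CV or schedule a call?"
  ],
  technical_peer: [
    "What aspect of robotics interests you?",
    "Working on anything similar?",
    "Happy to swap papers and resources."
  ],
  friendly_guide: [
    "What brings you to my portfolio?",
    "Ask me anything about my projects or background.",
    "Feel free to reach out on LinkedIn!"
  ]
};
```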

  4. Implementation Strategy

Phase 1: Basic Integration

  1. Add “Talk with Me” button to landing page
  2. Create modal interface for chat
  3. Integrate WebLLM with basic context (see the sketch after this list)
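A sketch of how these three steps connect, assuming the `@mlc-ai/web-llm` package's `CreateMLCEngine` API and reusing `narenContext` and the `buildSystemPrompt` sketch from above; the model ID, element ID, and `openChatModal` helper are placeholders:

```javascript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

let engine = null; // created lazily on first open

// Button on the landing page (step 1) opens the modal (step 2),
// then the model loads on demand (step 3).
document.getElementById("talk-with-me-btn")?.addEventListener("click", async () => {
  openChatModal(); // assumed helper that shows _includes/talk-with-me-modal.html
  engine ??= await CreateMLCEngine("Llama-3.2-3B-Instruct-q4f16_1-MLC"); // placeholder model ID
});

// Send one turn through the OpenAI-style chat API that web-llm exposes.
async function sendMessage(userText) {
  const reply = await engine.chat.completions.create({
    messages: [
      { role: "system", content: buildSystemPrompt(narenContext, "friendly_guide") },
      { role: "user", content: userText }
    ]
  });
  return reply.choices[0].message.content;
}
```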

Phase 2: Context Enhancement

  1. Pre-process all Jekyll data into JSON
  2. Implement semantic search for relevant content (stand-in sketch after this list)
  3. Add project-specific deep-dive capabilities
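For step 2, a simple keyword-overlap scorer can serve as a stand-in until true semantic (embedding-based) search is wired in; the chunk shape for `context-data.json` shown here is an assumption:

```javascript
// Stand-in for semantic search: score each context chunk by keyword
// overlap with the query and keep the top three matches.
function findRelevantContent(query, chunks) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return chunks
    .map((chunk) => ({
      chunk,
      score: terms.filter((t) => chunk.text.toLowerCase().includes(t)).length
    }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((r) => r.chunk);
}

// Assumed shape of context-data.json entries:
// [{ id: "gpt-carla", text: "GPT-Carla Autonomous Driving: ..." }, ...]
```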

Phase 3: Advanced Features

  1. Voice input/output integration
  2. Conversation history persistence (sketched after this list)
  3. Dynamic content linking (navigate to projects)
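For conversation history (step 2), a `localStorage`-backed sketch; the storage key is an assumption:

```javascript
// Hypothetical persistence layer: keep the running message list in
// localStorage so a page reload resumes the conversation.
const HISTORY_KEY = "talk-with-me-history"; // assumed storage key

function saveHistory(messages) {
  localStorage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory() {
  try {
    // getItem returns null when absent; JSON.parse(null) yields null.
    return JSON.parse(localStorage.getItem(HISTORY_KEY)) ?? [];
  } catch {
    return []; // corrupt entry: start fresh
  }
}
```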

  5. File Structure

```
assets/js/
├── webllm/
│   ├── webllm.js           (WebLLM library)
│   ├── models/             (quantized model files)
│   └── chat-interface.js
├── talk-with-me.js         (main integration)
└── context-data.json       (processed Jekyll data)

_includes/
└── talk-with-me-modal.html

_sass/
└── _talk-with-me.scss
```

  6. Sample Conversations

Recruiter Scenario:

User: "Hi, I'm looking for a robotics engineer with ROS experience"

Naren: "Perfect! I have extensive ROS2 experience from my role at Padma Agrobotics, where I worked on autonomous agricultural systems. I've published research on multi-agent systems and have 95% proficiency in ROS/ROS2. What specific challenges is your team facing?"

Technical Colleague:

User: "How do you handle sensor fusion in ROS2?"

Naren: "Great question! In my Padma Agrobotics project, I integrated GPS and camera data using sensor_msgs. The key was handling different update rates - GPS at 10Hz, camera at 30Hz. I used message filters for temporal synchronization. Have you run into similar timing issues?"

  7. Performance Considerations

● “Talk with Me” Feature Explained

What it means: an AI-powered chat where visitors hold a conversation "with you". The LLM responds as if it were Naren, drawing on your portfolio data, like a 24/7 personal assistant that knows your projects, skills, and experience.

Performance Analysis & Architecture Recommendation

Your instinct is absolutely correct: eagerly loading a 4GB model on every page would cripple load times.

Optimal Architecture:

```
Floating Chat Button (all pages) → Modal Loads → Model Downloads (lazy)
```

Implementation Strategy:

  1. Site-wide button: Small CSS/JS snippet added to all pages (~5KB total)
  2. Lazy loading: Model downloads ONLY when user clicks “Talk with Me”
  3. Progressive loading: Show "Preparing to chat…" while model loads (see the sketch after this list)
  4. Caching: Once downloaded, model stays cached in browser
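A sketch of steps 2–4 together, assuming `@mlc-ai/web-llm`'s `initProgressCallback` option; the element ID and model ID are placeholders:

```javascript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Hypothetical status element inside the chat modal.
const status = document.getElementById("chat-status");

async function loadEngineWithProgress() {
  status.textContent = "Preparing to chat…";
  const engine = await CreateMLCEngine(
    "Llama-3.2-3B-Instruct-q4f16_1-MLC", // placeholder model ID
    {
      // web-llm reports download/compile progress through this callback;
      // on return visits the weights come from the browser cache.
      initProgressCallback: (report) => {
        status.textContent = report.text;
      }
    }
  );
  status.textContent = "";
  return engine;
}
```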

Performance Impact: regular page loads gain only the ~5KB button snippet; the multi-gigabyte model is downloaded once, on first use, and served from the browser cache on return visits.

Example Implementation:

```
@narendhiran2000 | Roboticist | AI Developer

[ 💬 Talk with Me ]          ← floating button, all pages

─── Chat with Naren ───────────────────────────

👋 Hi! I'm Naren. What brings you here?

Choose your role to get a personalized conversation:
```
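The role chooser can map directly onto the `userClassification` personas defined earlier; a sketch where the `data-role` markup, `appendBotMessage`, and `startConversation` helpers are all assumptions:

```javascript
// Hypothetical wiring: each role button in the modal carries a
// data-role attribute matching a key in userClassification.
document.querySelectorAll("[data-role]").forEach((btn) => {
  btn.addEventListener("click", () => {
    const { greeting, persona } = userClassification[btn.dataset.role];
    appendBotMessage(greeting);   // assumed UI helper: renders a bot bubble
    startConversation(persona);   // assumed: seeds the system prompt with this persona
  });
});
```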