
Conversation


@Naren456 Naren456 commented Dec 31, 2025

Closes #

📝 Description

This PR refactors the ChatScreen component to improve code quality, state management, and user experience.

🔧 Changes Made

Component Modularization

We split the large ChatScreen.jsx file into focused functional components located in src/Components/ChatScreen/:

  • ChatHeader: Handles the top bar, navigation, and model mode switching.
  • MessageList: Manages the rendering of the chat history and auto-scroll logic.
  • ChatInput: Encapsulates the text input area, send button, and attachment handling.
  • EmptyState: Displays the initial "How can I help you?" view with suggestions.
  • QuickReplies: Manages the floating suggestion chips for follow-up questions.
  • TypingIndicator: A dedicated component for the "BabyNest is thinking..." animation.
  • useChatEngine: A hook that manages the agent and the local model.

Logic & State Refactor (ChatScreen.jsx)

  • Controller Role: ChatScreen.jsx now acts as the main controller, managing state and passing props to the child components.
  • State Management: Refactored the model loading state (progress, downloading status) to use useReducer for cleaner logic.
  • UI/UX: Fixed the positioning of the typing indicator by integrating it as a footer in the MessageList, so it scrolls naturally with the messages.
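The useReducer refactor looks roughly like this (a minimal sketch — the action types match those referenced later in the review, but the state field names here are illustrative):

```javascript
// Sketch of the model-loading reducer. Action types (START_INIT, SET_DOWNLOADING,
// SET_PROGRESS, INIT_SUCCESS, INIT_FAILURE) follow the ones used in ChatScreen.jsx;
// the exact state shape may differ.
const initialModelState = {
  isInitializing: false,
  isDownloading: false,
  progress: 0,
  isModelReady: false,
};

function modelReducer(state, action) {
  switch (action.type) {
    case 'START_INIT':
      // Reset everything and mark initialization as in progress
      return { ...initialModelState, isInitializing: true };
    case 'SET_DOWNLOADING':
      return { ...state, isDownloading: action.payload };
    case 'SET_PROGRESS':
      return { ...state, progress: action.payload };
    case 'INIT_SUCCESS':
      return { ...state, isInitializing: false, isModelReady: true };
    case 'INIT_FAILURE':
      return { ...state, isInitializing: false, isModelReady: false };
    default:
      return state;
  }
}
```

In the component this is wired up with `const [modelState, dispatch] = useReducer(modelReducer, initialModelState);`, replacing several separate useState calls.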

📷 Screenshots or Visual Changes (if applicable)

Screenshot from 2025-12-31 23-39-45 Screenshot from 2025-12-31 23-39-38

Video : https://drive.google.com/drive/folders/1shZI6mKrruf4Kg-hpjFUZVhpAPHYVoN3?usp=sharing

🤝 Collaboration

Collaborated with: na

✅ Checklist

  • I have read the contributing guidelines.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added necessary documentation (if applicable).
  • Any dependent changes have been merged and published in downstream modules.

Summary by CodeRabbit

  • New Features

    • Modular chat UI: header, input, message list, empty state, quick replies, typing indicator.
    • Persistent conversation storage, paste/quick-reply shortcuts, retry on model errors, delete conversation, auto-scroll, and copy messages.
    • Mode toggle with model download/init lifecycle, progress/status, and backend fallback for generation.
  • Refactor

    • Chat flow centralized into a hook-driven, component-based architecture with simplified screen logic.
  • Style

    • Updated pink-white theme and adjusted spacing/styling for headers and inputs.



coderabbitai bot commented Dec 31, 2025

📝 Walkthrough

Walkthrough

Introduces a componentized chat flow with a model lifecycle (download/init/unload), a new useChatEngine hook for persistence and multi-path message generation (RAG, local model, backend fallback), six new chat UI components, theme color adjustments, and HeaderWithBack/ProfileScreen UI tweaks.

Changes

  • ChatScreen refactor — Frontend/src/Screens/ChatScreen.jsx
    Major rewrite: adds a model reducer/lifecycle (download/progress/init/unload), wires useChatEngine for conversation/send flows, centralizes RAG/local/backend fallback handling, adds AsyncStorage hydration/persistence, expands quick-reply derivation, and composes the new chat UI components.
  • Chat UI components — Frontend/src/Components/ChatScreen/ChatHeader.jsx, ChatInput.jsx, EmptyState.jsx, MessageList.jsx, QuickReplies.jsx, TypingIndicator.jsx
    Adds six components: header (back/toggle/clear), pasteable multiline input with model/error-aware UI and retry, mode-aware empty state with quick replies, FlatList message renderer with avatar/Markdown/copy, quick-reply buttons, and an animated typing indicator.
  • Chat engine hook — Frontend/src/hooks/useChatEngine.js
    New hook managing conversation hydration/persistence, clearConversation, sendMessage with RAG/local/backend fallback, pending follow-ups, isGenerating, and AsyncStorage integration.
  • Model API updates — Frontend/src/model/model.jsx
    Adds unloadModel; makes download/load return explicit booleans/null on failure; adds caching, timing instrumentation, and input validation; reduces n_predict; stores and returns cached responses in generateResponse.
  • Header & Profile UI — Frontend/src/Components/HeaderWithBack.jsx, Frontend/src/Screens/ProfileScreen.jsx
    HeaderWithBack: adds a title, reduces padding/elevation, adjusts layout. ProfileScreen: switches to HeaderWithBack, adjusts SafeArea/layout, and sets section backgrounds to #FFF5F8.
  • Theme updates — Frontend/src/theme/ThemeContext.js
    Default theme color values changed (e.g., primary → rgb(218,79,122), background → #FFF5F8); ThemeProvider initial state set to themes.default.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as ChatScreen UI
    participant Hook as useChatEngine
    participant Model as Local Model
    participant RAG as RAG Service
    participant Backend as Backend /agent
    participant Storage as AsyncStorage

    Note over UI,Model: App mount / init
    UI->>Model: downloadModel()/loadModel()
    Model-->>UI: progress / init result

    UI->>Storage: load chat_history
    Storage-->>Hook: conversation data
    Hook-->>UI: hydrated conversation

    User->>UI: sendMessage(text)
    UI->>Hook: sendMessage(text, useRAGMode, initializeContext)
    Hook-->>UI: append user message, set isGenerating

    alt use RAG mode
        Hook->>RAG: query with conversationContext
        alt RAG handled
            RAG-->>Hook: assistant response
        else not handled
            Hook->>Backend: POST /agent
            Backend-->>Hook: agent response
        end
    else local model mode
        Hook->>Model: generateResponse(context)
        alt model produced
            Model-->>Hook: assistant response
        else model error
            Hook->>Backend: POST /agent (fallback)
            Backend-->>Hook: agent response
        end
    end

    Hook->>Storage: persist updated chat_history
    Hook-->>UI: assistant message (render)
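The fallback order shown in the diagram can be sketched as a plain function with injected generators (function names and signatures here are illustrative — the real useChatEngine hook also handles persistence, context, and UI state):

```javascript
// Illustrative sketch of the generation fallback order from the diagram.
// `rag`, `localModel`, and `backend` are injected async functions standing in
// for the RAG service, the local model, and the POST /agent endpoint.
async function generateReply(text, { useRAGMode, rag, localModel, backend }) {
  if (useRAGMode) {
    const ragResult = await rag(text);
    if (ragResult != null) return ragResult; // RAG handled the query
    return backend(text);                    // not handled → backend /agent
  }
  try {
    const modelResult = await localModel(text);
    if (modelResult != null) return modelResult; // local model produced output
  } catch (e) {
    // local model errored; fall through to the backend fallback
  }
  return backend(text);
}
```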

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 Hop, hop—new headers soft and bright,
Buttons, dots, and quick replies take flight,
Models download, then wisdom hums at night,
Pink petals guide each message into light,
A rabbit cheers: "Chats bloom — what a delight!"

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: The title "Redesign ChatScreen" directly reflects the main change: a comprehensive refactoring of ChatScreen.jsx into modular components with improved state management.
  • Description check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
Frontend/src/theme/ThemeContext.js (1)

62-64: Fix theme initialization to use the updated default theme.

The default theme colors were updated (lines 7-19), but the ThemeProvider still initializes with themes.light on Line 63. This means the new default theme will never be applied unless users explicitly switch to it. This appears to be an oversight.

🔎 Proposed fix
 export const ThemeProvider = ({ children }) => {
-  const [theme, setTheme] = useState(themes.light);
+  const [theme, setTheme] = useState(themes.default);
Frontend/src/Screens/ChatScreen.jsx (2)

270-280: Bug: Using userInput instead of textToSend in fallback agent call.

The fallback agent request uses userInput (line 277) which was already cleared at line 176. This will send an empty string to the backend agent.

🔎 Proposed fix
           const agentResponse = await fetch(`${BASE_URL}/agent`, {
             method: 'POST',
             headers: {
               'Content-Type': 'application/json',
             },
             body: JSON.stringify({
-              query: userInput,
+              query: textToSend,
               user_id: "default"
             }),
           });

380-382: Duplicate medicine_name case - unreachable code.

This case is already handled at lines 346-349. The duplicate here will never execute.

🔎 Proposed fix - remove duplicate
         case 'action_type':
           replies.push('Last', 'Weight', 'Appointment', 'Sleep');
           break;
-        case 'medicine_name':
-          replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
-          break;
         case 'frequency':
🧹 Nitpick comments (9)
Frontend/src/Components/HeaderWithBack.jsx (1)

14-17: Consider using flex for cleaner centering (optional).

The spacer view with fixed width works, but you could simplify the layout by removing the spacer and using flex: 1 on an empty View before the title, or by adjusting the title's flex styling to center it naturally within the available space.

Alternative approach using flex
-  
-      <Text style={styles.title}>{title}</Text>
-  
-      <View style={{ width: 40 }} /> 
+      <Text style={styles.title}>{title}</Text>
+      <View style={{ width: 24 }} /> {/* Match back button visual width */}

This ties the spacer width to the back button's icon size rather than an arbitrary value; alternatively, use absolute positioning for the title so no spacer is needed.

Frontend/src/Components/ChatScreen/QuickReplies.jsx (1)

11-21: Prefer stable keys over array indices.

Using array indices as keys (Line 13) can cause issues if the replies array is reordered or items are added/removed. If replies have unique text content or IDs, use those instead.

🔎 Suggested improvement

If reply text is unique, use it as the key:

-        {replies.map((reply, index) => (
+        {replies.map((reply) => (
           <TouchableOpacity
-            key={index}
+            key={reply}
             style={styles.quickReplyButton}
             onPress={() => handleQuickReply(reply)}
           >

If replies are objects with IDs, use reply.id instead.

Frontend/src/Components/ChatScreen/MessageList.jsx (1)

7-13: Unused theme parameter in createMarkdownStyles.

The function accepts theme but returns hardcoded colors. Either use theme values for consistency with the app's theming system, or remove the unused parameter.

🔎 Proposed fix — keep the parameter and use it:
   const createMarkdownStyles = (theme) => ({
     body: { color: "#000000", fontSize: 16, lineHeight: 24 },
-    heading1: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
-    heading2: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
-    strong: { fontWeight: "bold", color: "rgb(218,79,122)" },
+    heading1: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
+    heading2: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
+    strong: { fontWeight: "bold", color: theme?.primary || "rgb(218,79,122)" },
     list_item: { marginVertical: 5 },
   });
Frontend/src/Components/ChatScreen/EmptyState.jsx (1)

52-58: Remove unused emptyStateLogo style.

This style is defined but never referenced in the component.

🔎 Proposed fix
 const styles = StyleSheet.create({
   emptyStateContainer: {
     flexGrow: 1,
     justifyContent: 'center',
     alignItems: 'center',
     paddingHorizontal: 20,
     paddingBottom: 50,
   },
-  emptyStateLogo: {
-    width: 60,
-    height: 60,
-    marginBottom: 20,
-    tintColor: 'rgb(218,79,122)',
-    opacity: 0.8,
-  },
   emptyStateTitle: {
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)

39-47: Duplicate marginLeft on send button.

Line 41 adds inline marginLeft: 8 but the sendButton style (line 91) already has marginLeft: 5. The inline style overrides it, making the style definition dead code.

🔎 Proposed fix - remove inline marginLeft
            <TouchableOpacity 
-              style={[styles.sendButton, { marginLeft: 8 }, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]} 
+              style={[styles.sendButton, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]} 
               onPress={handleSendMessage}
               disabled={isGenerating || (!useRAGMode && !isModelReady)}
             >

Then update the style if 8px is preferred:

   sendButton: {
     backgroundColor: "rgb(218,79,122)",
     padding: 8,
     borderRadius: 20,
-    marginLeft: 5,
+    marginLeft: 8,
   },

21-23: Consider adding accessibility labels.

The paste and send buttons lack accessibilityLabel props, which would improve screen reader support.

🔎 Example
-          <TouchableOpacity style={styles.iconButton} onPress={handlePaste}>
+          <TouchableOpacity style={styles.iconButton} onPress={handlePaste} accessibilityLabel="Paste from clipboard">
             <Icon name="content-paste" size={20} color="rgba(0,0,0,0.4)"/>
           </TouchableOpacity>
           <TouchableOpacity 
             style={[styles.sendButton, { marginLeft: 8 }, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]} 
             onPress={handleSendMessage}
             disabled={isGenerating || (!useRAGMode && !isModelReady)}
+            accessibilityLabel="Send message"
           >

Also applies to: 40-46

Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)

28-32: Minor inconsistency in disabled state logic.

Line 29 uses truthy check (conversationLength ? 1 : 0.5) while line 31 uses strict equality (conversationLength === 0). If conversationLength were undefined, the button would appear disabled (opacity 0.5) but remain clickable.

🔎 Proposed fix for consistency
       <TouchableOpacity 
-        style={[styles.headerButton, { marginLeft: 8, opacity: conversationLength ? 1 : 0.5 }]}
+        style={[styles.headerButton, { marginLeft: 8, opacity: conversationLength > 0 ? 1 : 0.5 }]}
         onPress={clearConversation}
-        disabled={conversationLength === 0}
+        disabled={!conversationLength}
       >
Frontend/src/Screens/ChatScreen.jsx (2)

116-146: Consider adding cleanup for model initialization.

If the component unmounts during model download, dispatch calls will attempt to update unmounted component state. Add a cleanup function to prevent this.

🔎 Proposed fix
   useEffect(() => {
+    let isMounted = true;
+    
     const initModel = async () => {
       try {
-        dispatch({ type: 'START_INIT' });
+        if (isMounted) dispatch({ type: 'START_INIT' });
         console.log(`Initializing model ${GGUF_FILE}...`);
         
-        dispatch({ type: 'SET_DOWNLOADING', payload: true });
-        dispatch({ type: 'SET_PROGRESS', payload: 0 });
+        if (isMounted) dispatch({ type: 'SET_DOWNLOADING', payload: true });
+        if (isMounted) dispatch({ type: 'SET_PROGRESS', payload: 0 });

-        const success = await downloadModel(GGUF_FILE, (p) => dispatch({ type: 'SET_PROGRESS', payload: p }));
-        dispatch({ type: 'SET_DOWNLOADING', payload: false });
+        const success = await downloadModel(GGUF_FILE, (p) => {
+          if (isMounted) dispatch({ type: 'SET_PROGRESS', payload: p });
+        });
+        if (!isMounted) return;
+        dispatch({ type: 'SET_DOWNLOADING', payload: false });
         
         if (success) {
           dispatch({ type: 'INIT_SUCCESS' });
         } else {
           dispatch({ type: 'INIT_FAILURE' });
         }
       } catch (error) {
-        Alert.alert("Error", "Failed to init model: " + error.message);
+        if (isMounted) Alert.alert("Error", "Failed to init model: " + error.message);
         console.error(error);
-        dispatch({ type: 'INIT_FAILURE' });
+        if (isMounted) dispatch({ type: 'INIT_FAILURE' });
       }
     };
     initModel();
+    
+    return () => {
+      isMounted = false;
+    };
   }, []);

87-105: Consider adding data validation when loading chat history.

The parsedChats.forEach assumes the stored data has the expected structure. Corrupted AsyncStorage data could cause runtime errors.

🔎 Proposed fix
         if (storedChats) {
           const parsedChats = JSON.parse(storedChats);
+          if (!Array.isArray(parsedChats)) {
+            console.warn('Invalid chat history format, resetting...');
+            return;
+          }
           setConversation(parsedChats);
           
           // Hydrate conversation context for memory
-          parsedChats.forEach(msg => {
-            conversationContext.addMessage(msg.role, msg.content);
+          parsedChats.forEach((msg) => {
+            if (msg?.role && msg?.content) {
+              conversationContext.addMessage(msg.role, msg.content);
+            }
           });
         }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2d9f82 and 7f3c925.

📒 Files selected for processing (10)
  • Frontend/src/Components/ChatScreen/ChatHeader.jsx
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/Components/ChatScreen/EmptyState.jsx
  • Frontend/src/Components/ChatScreen/MessageList.jsx
  • Frontend/src/Components/ChatScreen/QuickReplies.jsx
  • Frontend/src/Components/ChatScreen/TypingIndicator.jsx
  • Frontend/src/Components/HeaderWithBack.jsx
  • Frontend/src/Screens/ChatScreen.jsx
  • Frontend/src/Screens/ProfileScreen.jsx
  • Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (6)
Frontend/src/Components/ChatScreen/MessageList.jsx (2)
Frontend/src/theme/ThemeContext.js (1)
  • theme (63-63)
Frontend/src/Screens/ChatScreen.jsx (1)
  • flatListRef (62-62)
Frontend/src/Components/ChatScreen/ChatHeader.jsx (2)
Frontend/src/Screens/ChatScreen.jsx (2)
  • styles (487-509)
  • navigation (50-50)
Frontend/src/Components/HeaderWithBack.jsx (2)
  • styles (22-42)
  • navigation (7-7)
Frontend/src/Components/ChatScreen/ChatInput.jsx (5)
Frontend/src/Screens/ChatScreen.jsx (4)
  • styles (487-509)
  • handlePaste (319-322)
  • useRAGMode (60-60)
  • userInput (59-59)
Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)
  • styles (43-65)
Frontend/src/Components/ChatScreen/EmptyState.jsx (1)
  • styles (44-97)
Frontend/src/Components/ChatScreen/MessageList.jsx (1)
  • styles (53-100)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)
  • styles (39-83)
Frontend/src/Components/ChatScreen/EmptyState.jsx (1)
Frontend/src/Screens/ChatScreen.jsx (2)
  • useRAGMode (60-60)
  • handleQuickReply (424-426)
Frontend/src/Components/ChatScreen/QuickReplies.jsx (1)
Frontend/src/Screens/ChatScreen.jsx (1)
  • handleQuickReply (424-426)
Frontend/src/Components/HeaderWithBack.jsx (1)
Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)
  • styles (43-65)
🔇 Additional comments (7)
Frontend/src/Screens/ProfileScreen.jsx (1)

69-70: LGTM! Clean refactor to reusable header component.

The adoption of HeaderWithBack simplifies the component structure and aligns with the broader UI refactor. The theme color updates are consistent with the changes in ThemeContext.js.

Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)

30-32: Note: All dots pulse in unison.

All three dots use the same fadeAnim opacity value, so they pulse together rather than in a staggered sequence. If you intended a "wave" effect where dots animate one after another, you would need separate Animated.Values with delayed start times.

This is likely intentional design, but flagging for awareness.

Frontend/src/Components/ChatScreen/MessageList.jsx (1)

44-48: LGTM with optional optimization note.

The FlatList setup is correct. Both onContentSizeChange and onLayout triggering scrollToEnd ensures reliable auto-scroll but may cause redundant scroll calls. Consider debouncing or using a single trigger if you notice scroll jitter.
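
One way to coalesce the two triggers is a small trailing-edge debounce (a generic sketch, not code from this PR; `flatListRef` is assumed from the component):

```javascript
// Generic trailing-edge debounce: only the last call within `waitMs` fires.
// Wrapping scrollToEnd with this coalesces the redundant calls coming from
// both onContentSizeChange and onLayout.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage sketch in the component:
// const scrollToEndDebounced = debounce(
//   () => flatListRef.current?.scrollToEnd({ animated: true }),
//   50
// );
```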

Frontend/src/Components/ChatScreen/EmptyState.jsx (1)

5-41: LGTM!

Clean implementation with mode-aware task switching. The component correctly handles both RAG and Model modes with appropriate UI feedback.

Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)

5-41: LGTM!

Well-structured header component with clean prop handling and appropriate visual feedback for disabled states.

Frontend/src/Screens/ChatScreen.jsx (2)

25-47: LGTM!

Good use of useReducer for managing model initialization state. The reducer pattern cleanly handles the download/ready lifecycle.


432-457: LGTM!

Clean component composition with conditional rendering. The modular structure significantly improves readability over the previous monolithic approach.

@Naren456 Naren456 changed the title Redesign chat screen Redesign ChatScreen Dec 31, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (3)

8-17: Consider adding fadeAnim to the dependency array.

Although fadeAnim is stable and won't cause issues, including it in the dependency array satisfies the React Hooks exhaustive-deps rule and improves code clarity.

🔎 Suggested change
   useEffect(() => {
     const animation = Animated.loop(
       Animated.sequence([
         Animated.timing(fadeAnim, { toValue: 1, duration: 500, useNativeDriver: true }),
         Animated.timing(fadeAnim, { toValue: 0, duration: 500, useNativeDriver: true }),
       ])
     );
     animation.start();
     return () => animation.stop();
-  }, []);
+  }, [fadeAnim]);

19-36: Add accessibility support for screen readers.

The component currently lacks accessibility labels. Screen reader users should be informed when the bot is processing a response.

🔎 Suggested enhancement
   return (
-    <View style={styles.container}>
+    <View 
+      style={styles.container}
+      accessibilityRole="status"
+      accessibilityLabel="BabyNest is thinking"
+      accessibilityLiveRegion="polite"
+    >
        {/* Bot Avatar - Matching MessageList */}
        <View style={styles.botAvatar}>
          <Icon name="smart-toy" size={20} color="rgb(218,79,122)" /> 
        </View>

30-32: Consider staggered dot animation for a more conventional typing indicator.

All three dots currently pulse in perfect sync. Most typing indicators use a staggered "wave" effect where each dot starts its animation slightly after the previous one, creating a more dynamic visual cue.

🔎 Example implementation with staggered timing
 export default function TypingIndicator() {
-  const fadeAnim = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim1 = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim2 = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim3 = React.useRef(new Animated.Value(0)).current;

   useEffect(() => {
+    const createDotAnimation = (anim, delay) => 
+      Animated.sequence([
+        Animated.delay(delay),
+        Animated.timing(anim, { toValue: 1, duration: 400, useNativeDriver: true }),
+        Animated.timing(anim, { toValue: 0, duration: 400, useNativeDriver: true }),
+      ]);
+
-    const animation = Animated.loop(
-      Animated.sequence([
-        Animated.timing(fadeAnim, { toValue: 1, duration: 500, useNativeDriver: true }),
-        Animated.timing(fadeAnim, { toValue: 0, duration: 500, useNativeDriver: true }),
-      ])
-    );
-    animation.start();
-    return () => animation.stop();
+    const animations = [
+      Animated.loop(createDotAnimation(fadeAnim1, 0)),
+      Animated.loop(createDotAnimation(fadeAnim2, 200)),
+      Animated.loop(createDotAnimation(fadeAnim3, 400)),
+    ];
+    
+    animations.forEach(anim => anim.start());
+    return () => animations.forEach(anim => anim.stop());
-  }, []);
+  }, [fadeAnim1, fadeAnim2, fadeAnim3]);

   // In the render:
-            <Animated.View style={[styles.dot, { opacity: fadeAnim  }]} />
-            <Animated.View style={[styles.dot, { opacity: fadeAnim  }]} />
-            <Animated.View style={[styles.dot, { opacity: fadeAnim  }]} />
+            <Animated.View style={[styles.dot, { opacity: fadeAnim1 }]} />
+            <Animated.View style={[styles.dot, { opacity: fadeAnim2 }]} />
+            <Animated.View style={[styles.dot, { opacity: fadeAnim3 }]} />
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f3c925 and fdcc64e.

📒 Files selected for processing (1)
  • Frontend/src/Components/ChatScreen/TypingIndicator.jsx
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (2)
Frontend/src/Components/ChatScreen/MessageList.jsx (1)
  • styles (53-100)
Frontend/src/Screens/ChatScreen.jsx (1)
  • styles (487-509)
🔇 Additional comments (1)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)

6-6: Previous critical issue resolved!

The Animated.Value is now correctly wrapped in useRef, ensuring it persists across renders and preventing the memory leak flagged in the earlier review.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
Frontend/src/Screens/ChatScreen.jsx (2)

235-236: Remove commented-out console.error.

Dead code should be removed to keep the codebase clean.

🔎 Proposed fix
-      // Additional debugging for undefined errors
-      if (!result) {
-        // console.error('❌ RESULT IS NULL/UNDEFINED!');
-      }

357-365: Consider adding explicit breaks or comments for case fallthrough.

Lines 357-365 use intentional case fallthrough where multiple cases share the same reply options. While this works correctly in JavaScript, adding a comment like // Intentional fallthrough or restructuring with explicit breaks would improve clarity.

🔎 Suggested improvements

Option 1: Add comments

        case 'time': 
        case 'appointment_time':
+          // Both time-related fields use the same suggestions
          replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
          break;
        
        case 'name':
        case 'medicine_name':
+          // Both name-related fields use the same medicine suggestions
          replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
          break;

Option 2: Group cases more explicitly

-        case 'time': 
-        case 'appointment_time':
-          replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
-          break;
-        
-        case 'name':
-        case 'medicine_name':
-          replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
-          break;
+        case 'time':
+        case 'appointment_time': {
+          replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
+          break;
+        }
+        
+        case 'name':
+        case 'medicine_name': {
+          replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
+          break;
+        }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fdcc64e and 69705e8.

📒 Files selected for processing (3)
  • Frontend/src/Components/ChatScreen/ChatHeader.jsx
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/Screens/ChatScreen.jsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • Frontend/src/Components/ChatScreen/ChatHeader.jsx
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/Components/ChatScreen/ChatInput.jsx (1)
Frontend/src/Screens/ChatScreen.jsx (5)
  • handlePaste (335-338)
  • useRAGMode (60-60)
  • userInput (59-59)
  • handleSendMessage (166-329)
  • isGenerating (57-57)
🔇 Additional comments (20)
Frontend/src/Components/ChatScreen/ChatInput.jsx (4)

5-13: LGTM! Well-structured component props.

The props are clearly defined and cover all necessary inputs for the chat input functionality. The separation of concerns between state management, event handlers, and mode flags is clean.


15-18: LGTM! Platform-specific keyboard handling.

The KeyboardAvoidingView configuration correctly handles the platform differences between iOS and Android, with appropriate offsets for the iOS keyboard.


26-37: LGTM! Consistent conditional logic across input states.

The editable, placeholder, and opacity logic correctly reflect the two modes:

  • RAG mode: always editable (doesn't require local model)
  • Model mode: editable only when isModelReady

This provides clear visual feedback to users about input availability.


39-48: LGTM! Conditional send button with proper disabled state.

The send button appears only when there's user input, and the disabled logic correctly prevents sending when generating or when the model isn't ready (in model mode). This matches the editable logic on the TextInput for consistency.

Frontend/src/Screens/ChatScreen.jsx (16)

1-1: LGTM! Clean modularization with AsyncStorage persistence.

The addition of AsyncStorage and the new component imports reflect the PR's goal of splitting ChatScreen into reusable modules. This improves maintainability and testability.

Also applies to: 17-23


26-47: LGTM! Well-structured reducer for model lifecycle.

The reducer pattern is appropriate for managing the complex model initialization state (downloading, progress, ready). The actions cover all necessary state transitions, and the initial state is properly defined.


67-85: LGTM! Comprehensive conversation clearing.

The clearConversation function properly clears all three data stores (state, AsyncStorage, conversationContext) and includes a user confirmation dialog. This ensures data consistency across the application.


87-111: LGTM! Robust chat history persistence with context hydration.

The loadChats effect properly validates the stored data format and hydrates the conversationContext on mount. The defensive checks for Array validation and message properties prevent crashes from corrupted storage data.


114-120: LGTM! Clean persistence helper.

The saveChats helper provides a centralized way to persist conversation history, with appropriate error logging. This is used consistently throughout the component after user messages and bot responses.


122-162: LGTM! Safe model initialization with cleanup.

The model initialization effect properly uses an isMounted flag to prevent state updates after component unmount, and dispatches appropriate reducer actions throughout the initialization lifecycle. The progress callback integration is well-structured.


166-174: LGTM! Smart content parameter handling.

The function signature accepting an optional content parameter (for programmatic sends like quick replies) is well-designed. The type check on Lines 167-168 elegantly handles both string content and event objects from UI presses, preventing bugs from accidental event object usage.


185-197: LGTM! Proper state and persistence flow.

The message handling correctly:

  1. Creates the user message with a unique ID
  2. Updates conversation state
  3. Persists to AsyncStorage immediately
  4. Clears the input
  5. Updates the conversationContext for memory

This ensures data consistency across all three storage locations.


212-230: LGTM! Clear mode branching logic.

The conditional logic cleanly separates RAG mode (with follow-up handling) from Model mode (backend model), with appropriate logging. The Model mode properly constructs a result object matching the expected structure.


239-283: LGTM! Comprehensive result handling with multiple action types.

The result handling properly manages follow-up context, navigation commands, logout flows, emergency actions, and context refresh. The safety checks for required fields (e.g., Line 243) prevent errors from malformed results.


311-322: LGTM! Complete bot response handling.

The bot message is properly created, added to conversation state, persisted to AsyncStorage, added to conversationContext, and triggers auto-scroll. The async operations are well-sequenced.


335-338: LGTM! Paste handler correctly updates input.

The handlePaste function retrieves clipboard text and calls handleInputChange to update the input field. This allows users to paste text without immediately sending it, giving them a chance to review or edit before sending.


346-436: LGTM! Comprehensive quick reply generation.

The getQuickReplies function provides contextual suggestions based on missing fields from pending follow-ups. The switch statement covers a wide range of fields with appropriate suggestions. The deduplication via Set (Line 435) and limiting to 4 items provides a clean UX.
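The dedupe-and-limit step reduces to a one-liner; this is a minimal sketch, with placeholder suggestion strings rather than the app's actual quick replies.

```javascript
// Illustrative sketch of the dedupe/limit step in getQuickReplies.
function dedupeQuickReplies(suggestions, limit = 4) {
  // Set preserves first-seen insertion order, so earlier (more relevant)
  // suggestions win ties before the list is capped.
  return [...new Set(suggestions)].slice(0, limit);
}
```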


447-471: LGTM! Clean component composition.

The main render elegantly composes the new modular components:

  • ChatHeader for navigation and controls
  • Conditional EmptyState or MessageList based on conversation state
  • TypingIndicator integrated as MessageList footer for smooth scrolling

This achieves the PR's goal of modularization while maintaining a clean component hierarchy.


488-496: LGTM! ChatInput properly wired with all necessary props.

The ChatInput component receives all required props for state management, event handling, and mode detection. The use of handleInputChange as a wrapper (Line 490) maintains flexibility for future input transformation logic.


284-309: The fallback is reachable when processFollowUpResponse() returns null.

The else block executes when result && typeof result === 'object' is false. This occurs when conversationContext.processFollowUpResponse() returns null (line 56 in ConversationContext.js) if there's no pending follow-up. While RAG query paths and Model mode always return objects, the follow-up handler's null return creates this fallback scenario.

Consider clarifying the condition or adding a comment explaining when this null return is expected, since the current logic may be confusing to maintainers.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
Frontend/src/model/model.jsx (1)

115-115: Inconsistent return value when memory check fails.

When checkMemoryBeforeLoading returns false, this line returns undefined instead of false. This inconsistency can cause issues for callers expecting a boolean.

🔎 Proposed fix
-    if (!(await checkMemoryBeforeLoading(destPath))) return;
+    if (!(await checkMemoryBeforeLoading(destPath))) return false;
Frontend/src/theme/ThemeContext.js (1)

36-36: Pre-existing typo: missing # prefix in dark theme text color.

The text property is missing the # prefix, which will result in an invalid color value.

🔎 Proposed fix
-    text: 'fff3b0', 
+    text: '#fff3b0', 
♻️ Duplicate comments (1)
Frontend/src/Screens/ChatScreen.jsx (1)

284-296: Scroll-to-bottom feature is now properly wired.

The onScrollPositionChange prop is correctly passed to MessageList, addressing the incomplete feature flagged in the previous review. The scroll position detection in MessageList.jsx (lines 49-59) will now properly show/hide the scroll-to-bottom button.

🧹 Nitpick comments (7)
Frontend/src/Components/ChatScreen/MessageList.jsx (4)

7-13: Unused theme dependency in useMemo.

The markdownStyles object uses hardcoded color values (#000000, rgb(218,79,122)) rather than values from the theme prop. Either remove theme from the dependency array or use theme colors for consistency.

🔎 Suggested fix
-  const markdownStyles = useMemo(() => ({
-    body: { color: "#000000", fontSize: 16, lineHeight: 24 },
-    heading1: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
-    heading2: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
-    strong: { fontWeight: "bold", color: "rgb(218,79,122)" },
-    list_item: { marginVertical: 5 },
-  }), [theme]);
+  const markdownStyles = useMemo(() => ({
+    body: { color: theme?.text || "#000000", fontSize: 16, lineHeight: 24 },
+    heading1: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
+    heading2: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
+    strong: { fontWeight: "bold", color: theme?.primary || "rgb(218,79,122)" },
+    list_item: { marginVertical: 5 },
+  }), [theme]);

20-43: Consider memoizing renderItem to reduce re-renders.

The inline renderItem function is recreated on every render. For chat lists that may grow large, extracting and memoizing this with useCallback can improve scroll performance.


44-44: Verify item.id is always defined to prevent key warnings.

If any message lacks an id, React will produce key warnings or undefined behavior. Consider a fallback:

-      keyExtractor={(item) => item.id}
+      keyExtractor={(item, index) => item.id ?? `msg-${index}`}

46-47: Redundant scroll-to-end triggers.

Both onContentSizeChange and onLayout call scrollToEnd. Typically, onContentSizeChange alone suffices for auto-scrolling when new messages arrive. Triggering on both may cause double-scrolling or slight jank on initial layout.

Frontend/src/hooks/useChatEngine.js (1)

136-136: Potential ID collision between user and bot messages.

Using Date.now() + 1 for the bot message ID could collide if processing is very fast. Consider using a UUID or a more robust ID generation strategy.
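A common collision-resistant scheme combines a timestamp with a random suffix. This is a sketch of one such generator, not the hook's actual code.

```javascript
// Hypothetical message ID generator; the hook may adopt a different scheme
// (e.g. a UUID library) instead.
function makeMessageId(prefix) {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
}
```

The random suffix keeps IDs unique even when a user message and the bot's reply are created within the same millisecond.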

Frontend/src/Screens/ChatScreen.jsx (2)

1-1: Unused import: AsyncStorage.

AsyncStorage is imported but not used in this file. Chat persistence is handled in useChatEngine.js.

🔎 Proposed fix
-import AsyncStorage from '@react-native-async-storage/async-storage';

108-114: Consider adding initModel to useEffect dependencies or suppressing the lint warning.

The effect calls initModel but has an empty dependency array. While the guard on line 85 prevents re-entry, ESLint will likely warn about missing dependencies. Either add the dependency or use an eslint-disable comment with justification.

 useEffect(() => {
   initModel();
-  // Cleanup on unmount
   return () => {
     unloadModel();
   };
+  // intentionally run only on mount; initModel has its own re-entry guard
+  // eslint-disable-next-line react-hooks/exhaustive-deps
 }, []);
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69705e8 and b196e0a.

📒 Files selected for processing (6)
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/Components/ChatScreen/MessageList.jsx
  • Frontend/src/Screens/ChatScreen.jsx
  • Frontend/src/hooks/useChatEngine.js
  • Frontend/src/model/model.jsx
  • Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (4)
Frontend/src/hooks/useChatEngine.js (1)
Frontend/src/model/model.jsx (6)
  • context (8-8)
  • response (42-42)
  • response (201-201)
  • result (193-197)
  • generateResponse (143-208)
  • generateResponse (143-208)
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
Frontend/src/Screens/ChatScreen.jsx (5)
  • handlePaste (160-163)
  • useRAGMode (60-60)
  • modelError (56-56)
  • userInput (59-59)
  • handleSendMessage (118-154)
Frontend/src/hooks/useChatEngine.js (1)
  • isGenerating (11-11)
Frontend/src/Screens/ChatScreen.jsx (6)
Frontend/src/hooks/useChatEngine.js (6)
  • useChatEngine (9-159)
  • useChatEngine (9-159)
  • clearConversation (46-50)
  • sendMessage (52-151)
  • conversation (10-10)
  • isGenerating (11-11)
Frontend/src/model/model.jsx (5)
  • context (8-8)
  • downloadModel (70-100)
  • downloadModel (70-100)
  • unloadModel (130-141)
  • unloadModel (130-141)
Frontend/src/Components/ChatScreen/ChatInput.jsx (1)
  • ChatInput (5-64)
Frontend/src/Components/ChatScreen/MessageList.jsx (1)
  • MessageList (6-63)
Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)
  • ChatHeader (5-41)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)
  • TypingIndicator (5-37)
Frontend/src/model/model.jsx (1)
Frontend/src/context/AgentContext.jsx (3)
  • error (29-29)
  • context (19-19)
  • context (27-27)
🔇 Additional comments (9)
Frontend/src/model/model.jsx (2)

130-141: Good addition of unloadModel for resource cleanup.

The new unloadModel function properly releases the llama context and handles errors gracefully. This aligns with the cleanup pattern in ChatScreen.jsx's unmount effect.


191-199: Timing instrumentation is helpful for performance monitoring.

The added logging for inference duration provides useful debugging and performance tracking information.

Frontend/src/theme/ThemeContext.js (1)

8-18: Theme color updates and default theme switch look good.

The color palette updates align with the redesigned UI components, and switching the initial theme to default is consistent with the PR objectives.

Also applies to: 63-63

Frontend/src/Components/ChatScreen/ChatInput.jsx (2)

23-25: Accessibility labels are well implemented.

Good use of accessibilityLabel on interactive elements for screen reader support.

Also applies to: 55-56


27-46: Error retry flow is well structured.

The conditional rendering for model error state with a retry action provides good UX for recovery from failed model loading.

Frontend/src/hooks/useChatEngine.js (2)

46-50: clearConversation is properly memoized.

Good use of useCallback for the clear function that's passed as a prop.


77-78: The concern is already addressed in the code. The ragService.initialize() method includes a guard clause (if (this.isInitialized) return;) that ensures it returns immediately on subsequent calls. While calling it on every message send might appear wasteful, it has negligible overhead due to the early return, and no side effects occur after the first initialization.

Likely an incorrect or invalid review comment.

Frontend/src/Screens/ChatScreen.jsx (2)

25-48: Clean reducer pattern for model state management.

The modelReducer with typed actions (START_INIT, SET_DOWNLOADING, SET_PROGRESS, INIT_SUCCESS, INIT_FAILURE) provides clear state transitions and is easier to debug than multiple useState calls. The initModel callback properly handles the download flow and error cases.

Also applies to: 55-56, 83-106


117-154: Well-structured separation of concerns for message handling.

The handleSendMessage function correctly delegates core logic to the useChatEngine hook while keeping UI-specific side effects (navigation, logout, emergency routing) in the screen component. This maintains a clean separation between business logic and presentation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Frontend/src/model/model.jsx (1)

200-207: Missing cache write for successful responses.

The generateResponse function reads from cache via getCache(cacheKey) but never calls setCache to store new responses. This means the caching mechanism is ineffective—responses are never cached.

🔎 Proposed fix to cache successful responses
       const response = result?.text?.trim();
+      
+      if (response && cacheKey) {
+        setCache(cacheKey, response);
+      }
+      
       return response;
🧹 Nitpick comments (4)
Frontend/src/model/model.jsx (1)

191-199: Consider removing or gating timing logs in production.

The inference timing instrumentation is useful for development but may add noise in production logs. Consider gating these logs behind a __DEV__ check or a debug flag.

-      console.log("Starting inference...");
-      const startTime = Date.now();
+      if (__DEV__) console.log("Starting inference...");
+      const startTime = __DEV__ ? Date.now() : 0;
       const result = await context.completion({
         messages: messagesToSend,
         n_predict: 500,
         stop: stopWords
       });
-      const endTime = Date.now();
-      console.log(`⚡ Inference Time: ${endTime - startTime}ms`);
+      if (__DEV__) {
+        const endTime = Date.now();
+        console.log(`⚡ Inference Time: ${endTime - startTime}ms`);
+      }
Frontend/src/hooks/useChatEngine.js (3)

70-75: Stale closure issue mitigated but currentConversation timing is fragile.

The functional update pattern is used, but currentConversation is assigned inside the state updater and used outside. While React guarantees synchronous execution of the updater function, relying on this side-effect assignment is fragile and may confuse future maintainers.

A cleaner approach would be to use a ref to track the latest conversation:

const conversationRef = useRef(conversation);
useEffect(() => { conversationRef.current = conversation; }, [conversation]);
// Then use conversationRef.current when needed

This is minor since the current code works, but worth noting for maintainability.


9-11: Unused parameters context and refreshContext should be documented or removed.

The hook accepts refreshContext as a parameter but never uses it. The context parameter is only passed to conversationContext.setUserContext(). Consider documenting their intended purpose or removing unused parameters.


81-82: initialize() is idempotent and already optimized.

While ragService.initialize() is called on every sendMessage, the implementation includes an if (this.isInitialized) return; guard (line 20 of RAGService.js) that makes subsequent calls instant no-ops. The expensive operation (loadIntentEmbeddings()) runs only once. However, the explicit call here is functionally redundant since processQuery() already includes the same initialization fallback (RAGService.js, line 635). Consider removing the redundant call on line 81 or deferring initialization to the first RAG operation to reduce unnecessary async invocations.
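The guard described is the standard idempotent-init pattern. A minimal sketch, where a counter stands in for the expensive loadIntentEmbeddings() call (names mirror this review's description of RAGService, not its implementation):

```javascript
// Minimal sketch of an idempotent initializer.
class FakeRagService {
  constructor() {
    this.isInitialized = false;
    this.initCount = 0;
  }
  async initialize() {
    if (this.isInitialized) return; // subsequent calls are instant no-ops
    this.initCount += 1;            // stands in for loadIntentEmbeddings()
    this.isInitialized = true;
  }
}
```

Because the flag is set before any await, even concurrent callers only pay the initialization cost once.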

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b196e0a and 18c9444.

📒 Files selected for processing (4)
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/hooks/useChatEngine.js
  • Frontend/src/model/model.jsx
  • Frontend/src/theme/ThemeContext.js
🚧 Files skipped from review as they are similar to previous changes (2)
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/hooks/useChatEngine.js (1)
Frontend/src/model/model.jsx (8)
  • context (8-8)
  • response (42-42)
  • response (201-201)
  • result (193-197)
  • startTime (192-192)
  • generateResponse (143-208)
  • generateResponse (143-208)
  • endTime (198-198)
🔇 Additional comments (4)
Frontend/src/model/model.jsx (2)

70-100: Approve changes with a note on potential race condition.

The refactored downloadModel now correctly attempts to load the model after download and returns null on failure, providing better error feedback. However, there's a potential race condition: if downloadModel is called twice concurrently with the same fileName, both calls could attempt to load the model simultaneously since there's no mutex/lock.

For a production app, consider adding a download-in-progress guard (e.g., a Map tracking active downloads) to prevent concurrent downloads of the same file.
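A download-in-progress guard can be sketched with a module-level Map of in-flight promises; this is a hypothetical illustration, not the app's code.

```javascript
// Hypothetical in-flight guard: concurrent callers for the same file
// share one promise instead of starting duplicate downloads.
const inFlight = new Map();

function downloadOnce(fileName, doDownload) {
  if (inFlight.has(fileName)) return inFlight.get(fileName);
  const promise = doDownload(fileName).finally(() => inFlight.delete(fileName));
  inFlight.set(fileName, promise);
  return promise;
}
```

The entry is removed in `finally`, so a failed download can be retried rather than caching the rejection forever.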


130-141: LGTM - Clean model cleanup implementation.

The unloadModel function properly releases the llama context, resets the module-level context variable, and handles errors gracefully. This integrates well with the chat screen unmount lifecycle.

Frontend/src/hooks/useChatEngine.js (2)

116-141: Good: Backend fallback with timeout properly implemented.

The AbortController with 15s timeout is correctly implemented, addressing the previous review concern. The timeout is cleared in both success and error paths. The fallback to local model on backend failure provides resilience.


52-62: Silent failure when context initialization fails.

When initializeContext() fails, only a warning is logged but execution continues. This may lead to confusing behavior if the context is required for subsequent operations. Consider either:

  1. Returning early with an error message to the user
  2. Or documenting why it's acceptable to continue without context

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
Frontend/src/model/model.jsx (2)

144-217: Enhanced robustness and performance.

The changes add input validation, timing instrumentation, and response caching. The reduction of n_predict from 500 to 256 will improve response latency but may truncate longer responses. Ensure this tradeoff aligns with product requirements.

Note: Monitor if users encounter truncated responses in practice. You may want to make n_predict configurable based on query complexity or user preferences.


117-122: Consider platform-specific configuration for GPU and thread settings.

The model initialization hardcodes n_gpu_layers: 0 and n_threads: 2 for emulator stability. Production devices should benefit from GPU acceleration and additional threads. Implement device detection (DeviceInfo is already imported) to apply higher-performance settings on real devices, or move these values to environment-based configuration.

Frontend/src/Screens/ChatScreen.jsx (1)

174-264: Comprehensive quick reply logic.

The function covers a wide range of field types and provides helpful suggestions. The deduplication and limiting to 4 replies prevents UI clutter.

Consider extracting this to a separate utility file (e.g., utils/quickReplySuggestions.js) to keep the screen component focused on rendering and event handling. This would also make it easier to test and extend.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 18c9444 and 628cd27.

📒 Files selected for processing (5)
  • Frontend/src/Components/ChatScreen/ChatInput.jsx
  • Frontend/src/Components/ChatScreen/MessageList.jsx
  • Frontend/src/Screens/ChatScreen.jsx
  • Frontend/src/hooks/useChatEngine.js
  • Frontend/src/model/model.jsx
🧰 Additional context used
🧬 Code graph analysis (4)
Frontend/src/hooks/useChatEngine.js (4)
Frontend/src/context/AgentContext.jsx (3)
  • isInitialized (31-31)
  • refreshContext (68-94)
  • error (29-29)
Frontend/src/model/model.jsx (8)
  • context (8-8)
  • response (42-42)
  • response (204-204)
  • result (194-198)
  • startTime (193-193)
  • generateResponse (144-217)
  • generateResponse (144-217)
  • endTime (200-200)
Frontend/src/theme/ThemeContext.js (1)
  • context (79-79)
Frontend/src/Screens/ChatScreen.jsx (2)
  • clearConversation (72-84)
  • useRAGMode (60-60)
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
Frontend/src/Screens/ChatScreen.jsx (5)
  • handlePaste (163-166)
  • useRAGMode (60-60)
  • modelError (56-56)
  • userInput (59-59)
  • handleSendMessage (121-157)
Frontend/src/hooks/useChatEngine.js (1)
  • isGenerating (11-11)
Frontend/src/Screens/ChatScreen.jsx (3)
Frontend/src/hooks/useChatEngine.js (6)
  • useChatEngine (9-201)
  • useChatEngine (9-201)
  • clearConversation (49-54)
  • sendMessage (56-193)
  • conversation (10-10)
  • isGenerating (11-11)
Frontend/src/model/model.jsx (5)
  • context (8-8)
  • downloadModel (70-100)
  • downloadModel (70-100)
  • unloadModel (131-142)
  • unloadModel (131-142)
Frontend/src/services/ConversationContext.js (2)
  • conversationContext (384-384)
  • conversationContext (384-384)
Frontend/src/Components/ChatScreen/MessageList.jsx (3)
Frontend/src/theme/ThemeContext.js (1)
  • theme (63-63)
Frontend/src/hooks/useChatEngine.js (1)
  • conversation (10-10)
Frontend/src/Screens/ChatScreen.jsx (1)
  • flatListRef (62-62)
🔇 Additional comments (18)
Frontend/src/Components/ChatScreen/MessageList.jsx (3)

15-23: LGTM!

The auto-scroll implementation correctly tracks conversation length changes and only scrolls when new messages are added.


30-65: Well-structured message rendering.

The differentiation between user and bot messages is clear, with appropriate styling and the copy-on-long-press feature. Good use of optional chaining for the onCopyMessage callback.


69-78: LGTM!

The scroll position tracking correctly detects when the user has scrolled away from the bottom, enabling the scroll-to-bottom button feature. The throttle value of 16ms is appropriate for smooth 60fps scrolling.

Frontend/src/model/model.jsx (2)

74-99: Improved error handling in download flow.

The changes ensure that loadModel is called after download and properly validates the result before returning. This prevents returning a path to a model that failed to load. The consistent null returns on failure improve error handling predictability.


131-142: LGTM!

The new unloadModel function properly handles cleanup with defensive checks and error logging. This supports the new model lifecycle management introduced in the PR.

Frontend/src/Components/ChatScreen/ChatInput.jsx (3)

5-15: Well-designed component interface.

The props provide clear separation between input state, generation state, and error handling, making the component easy to test and maintain.


31-53: LGTM!

The conditional rendering properly handles model error states and provides clear user feedback. The editable prop correctly restricts input when the model isn't ready in local mode.


55-67: LGTM!

The send button correctly uses optional chaining (addressing the previous review comment) and has appropriate disabled states to prevent issues during generation or when the model isn't ready.

Frontend/src/hooks/useChatEngine.js (5)

15-36: LGTM!

The chat history loading properly validates data before setting state and hydrating the conversation context. The empty dependency array ensures it only runs on mount.


39-54: LGTM!

The saveChats function is now properly memoized (addressing the previous review comment), and clearConversation includes a cancellation mechanism to prevent stale responses from appearing after clearing.


56-91: LGTM!

The message ID generation now uses a robust combination of timestamp and random string (addressing the previous collision concern), and the functional state update ensures the latest conversation state is used (fixing the stale closure issue).


138-161: LGTM!

The backend fallback now includes proper timeout handling with AbortController (addressing the previous review comment), preventing indefinite hangs on network issues.
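The timeout pattern described looks roughly like this. It is a sketch under the assumptions named in the comments; the hook's actual fetch options and the 15 s value may differ.

```javascript
// Sketch of an abortable call with a timeout. The timer is cleared in
// both the success and error paths, as noted above.
async function withTimeout(run, ms) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

In the hook this would wrap a `fetch(url, { signal })` call; on abort, fetch rejects and the code can fall back to the local model.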


164-193: Cancellation guard prevents stale responses.

The cancellation mechanism properly prevents responses from appearing after the conversation is cleared. The inclusion of conversation in the useCallback dependencies (Line 193) ensures the function always has the current state, though this means the function is recreated on every message. This is a reasonable tradeoff for correctness over micro-optimization.

Frontend/src/Screens/ChatScreen.jsx (5)

25-48: LGTM!

The model state reducer provides clean, predictable state management for the model lifecycle. All necessary state transitions are covered.


55-69: Clean hook integration.

The chat engine hook is properly integrated, and wrapping baseClearConversation with a confirmation dialog provides good UX.


86-117: LGTM!

The model initialization correctly prevents duplicate downloads and properly cleans up on unmount with unloadModel. The empty dependency array for the effect is appropriate since initialization should only happen once.


121-157: LGTM!

The message sending flow correctly delegates core logic to the hook and handles UI-specific side effects (navigation, logout, emergency) in the screen component. Good separation of concerns.


274-330: LGTM!

The component composition is clean and well-organized. The scroll-to-bottom button is now properly wired with the onScrollPositionChange callback (addressing the previous review comment), and all modular components are correctly integrated.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

Naren456 commented Jan 2, 2026

@bhavik-mangla
I’ve made some changes related to the ChatScreen.
Please review the PR and let me know if you have any suggestions or improvements.

