Redesign ChatScreen #150
Conversation
📝 Walkthrough

Introduces a componentized chat flow with a model lifecycle (download/init/unload), a new useChatEngine hook for persistence and multi-path message generation (RAG, local model, backend fallback), six new chat UI components, theme color adjustments, and HeaderWithBack/ProfileScreen UI tweaks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as ChatScreen UI
    participant Hook as useChatEngine
    participant Model as Local Model
    participant RAG as RAG Service
    participant Backend as Backend /agent
    participant Storage as AsyncStorage

    Note over UI,Model: App mount / init
    UI->>Model: downloadModel()/loadModel()
    Model-->>UI: progress / init result
    UI->>Storage: load chat_history
    Storage-->>Hook: conversation data
    Hook-->>UI: hydrated conversation

    User->>UI: sendMessage(text)
    UI->>Hook: sendMessage(text, useRAGMode, initializeContext)
    Hook-->>UI: append user message, set isGenerating

    alt use RAG mode
        Hook->>RAG: query with conversationContext
        alt RAG handled
            RAG-->>Hook: assistant response
        else not handled
            Hook->>Backend: POST /agent
            Backend-->>Hook: agent response
        end
    else local model mode
        Hook->>Model: generateResponse(context)
        alt model produced
            Model-->>Hook: assistant response
        else model error
            Hook->>Backend: POST /agent (fallback)
            Backend-->>Hook: agent response
        end
    end

    Hook->>Storage: persist updated chat_history
    Hook-->>UI: assistant message (render)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
Frontend/src/theme/ThemeContext.js (1)
62-64: Fix theme initialization to use the updated default theme.

The default theme colors were updated (lines 7-19), but the ThemeProvider still initializes with `themes.light` on line 63. This means the new default theme will never be applied unless users explicitly switch to it. This appears to be an oversight.

🔎 Proposed fix

```diff
 export const ThemeProvider = ({ children }) => {
-  const [theme, setTheme] = useState(themes.light);
+  const [theme, setTheme] = useState(themes.default);
```

Frontend/src/Screens/ChatScreen.jsx (2)
270-280: Bug: Using `userInput` instead of `textToSend` in fallback agent call.

The fallback agent request uses `userInput` (line 277), which was already cleared at line 176. This will send an empty string to the backend agent.

🔎 Proposed fix

```diff
 const agentResponse = await fetch(`${BASE_URL}/agent`, {
   method: 'POST',
   headers: {
     'Content-Type': 'application/json',
   },
   body: JSON.stringify({
-    query: userInput,
+    query: textToSend,
     user_id: "default"
   }),
 });
```
380-382: Duplicate `medicine_name` case - unreachable code.

This case is already handled at lines 346-349. The duplicate here will never execute.

🔎 Proposed fix - remove duplicate

```diff
 case 'action_type':
   replies.push('Last', 'Weight', 'Appointment', 'Sleep');
   break;
-case 'medicine_name':
-  replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
-  break;
 case 'frequency':
```
🧹 Nitpick comments (9)
Frontend/src/Components/HeaderWithBack.jsx (1)
14-17: Consider using flex for cleaner centering (optional).

The spacer view with fixed width works, but you could simplify the layout by removing the spacer and using `flex: 1` on an empty View before the title, or by adjusting the title's flex styling to center it naturally within the available space.

Alternative approach using flex

```diff
-      <Text style={styles.title}>{title}</Text>
-      <View style={{ width: 40 }} />
+      <Text style={styles.title}>{title}</Text>
+      <View style={{ width: 24 }} /> {/* Match back button visual width */}
```

This maintains visual balance without relying on a magic number, or alternatively, use absolute positioning for the title.
Frontend/src/Components/ChatScreen/QuickReplies.jsx (1)
11-21: Prefer stable keys over array indices.

Using array indices as keys (line 13) can cause issues if the `replies` array is reordered or items are added/removed. If replies have unique text content or IDs, use those instead.

🔎 Suggested improvement

If reply text is unique, use it as the key:

```diff
-      {replies.map((reply, index) => (
+      {replies.map((reply) => (
         <TouchableOpacity
-          key={index}
+          key={reply}
           style={styles.quickReplyButton}
           onPress={() => handleQuickReply(reply)}
         >
```

If replies are objects with IDs, use `reply.id` instead.

Frontend/src/Components/ChatScreen/MessageList.jsx (1)
7-13: Unused `theme` parameter in `createMarkdownStyles`.

The function accepts `theme` but returns hardcoded colors. Either use theme values for consistency with the app's theming system, or remove the unused parameter.

🔎 Proposed fix

```diff
-const createMarkdownStyles = (theme) => ({
-  body: { color: "#000000", fontSize: 16, lineHeight: 24 },
-  heading1: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
-  heading2: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
-  strong: { fontWeight: "bold", color: "rgb(218,79,122)" },
+const createMarkdownStyles = () => ({
+  body: { color: "#000000", fontSize: 16, lineHeight: 24 },
+  heading1: { color: theme.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
+  heading2: { color: theme.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
+  strong: { fontWeight: "bold", color: theme.primary || "rgb(218,79,122)" },
   list_item: { marginVertical: 5 },
 });
```

Frontend/src/Components/ChatScreen/EmptyState.jsx (1)
52-58: Remove unused `emptyStateLogo` style.

This style is defined but never referenced in the component.

🔎 Proposed fix

```diff
 const styles = StyleSheet.create({
   emptyStateContainer: {
     flexGrow: 1,
     justifyContent: 'center',
     alignItems: 'center',
     paddingHorizontal: 20,
     paddingBottom: 50,
   },
-  emptyStateLogo: {
-    width: 60,
-    height: 60,
-    marginBottom: 20,
-    tintColor: 'rgb(218,79,122)',
-    opacity: 0.8,
-  },
   emptyStateTitle: {
```

Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
39-47: Duplicate `marginLeft` on send button.

Line 41 adds inline `marginLeft: 8` but the `sendButton` style (line 91) already has `marginLeft: 5`. The inline style overrides it, making the style definition dead code.

🔎 Proposed fix - remove inline marginLeft

```diff
 <TouchableOpacity
-  style={[styles.sendButton, { marginLeft: 8 }, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]}
+  style={[styles.sendButton, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]}
   onPress={handleSendMessage}
   disabled={isGenerating || (!useRAGMode && !isModelReady)}
 >
```

Then update the style if 8px is preferred:

```diff
 sendButton: {
   backgroundColor: "rgb(218,79,122)",
   padding: 8,
   borderRadius: 20,
-  marginLeft: 5,
+  marginLeft: 8,
 },
```
21-23: Consider adding accessibility labels.

The paste and send buttons lack `accessibilityLabel` props, which would improve screen reader support.

🔎 Example

```diff
-<TouchableOpacity style={styles.iconButton} onPress={handlePaste}>
+<TouchableOpacity style={styles.iconButton} onPress={handlePaste} accessibilityLabel="Paste from clipboard">
   <Icon name="content-paste" size={20} color="rgba(0,0,0,0.4)"/>
 </TouchableOpacity>
```

```diff
 <TouchableOpacity
   style={[styles.sendButton, { marginLeft: 8 }, (!useRAGMode && !isModelReady) && { opacity: 0.5 }]}
   onPress={handleSendMessage}
   disabled={isGenerating || (!useRAGMode && !isModelReady)}
+  accessibilityLabel="Send message"
 >
```

Also applies to: 40-46
Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)
28-32: Minor inconsistency in disabled state logic.

Line 29 uses a truthy check (`conversationLength ? 1 : 0.5`) while line 31 uses strict equality (`conversationLength === 0`). If `conversationLength` were `undefined`, the button would appear disabled (opacity 0.5) but remain clickable.

🔎 Proposed fix for consistency

```diff
 <TouchableOpacity
-  style={[styles.headerButton, { marginLeft: 8, opacity: conversationLength ? 1 : 0.5 }]}
+  style={[styles.headerButton, { marginLeft: 8, opacity: conversationLength > 0 ? 1 : 0.5 }]}
   onPress={clearConversation}
-  disabled={conversationLength === 0}
+  disabled={!conversationLength || conversationLength === 0}
 >
```

Frontend/src/Screens/ChatScreen.jsx (2)
116-146: Consider adding cleanup for model initialization.

If the component unmounts during model download, `dispatch` calls will attempt to update unmounted component state. Add a cleanup function to prevent this.

🔎 Proposed fix

```diff
 useEffect(() => {
+  let isMounted = true;
+
   const initModel = async () => {
     try {
-      dispatch({ type: 'START_INIT' });
+      if (isMounted) dispatch({ type: 'START_INIT' });
       console.log(`Initializing model ${GGUF_FILE}...`);
-      dispatch({ type: 'SET_DOWNLOADING', payload: true });
-      dispatch({ type: 'SET_PROGRESS', payload: 0 });
+      if (isMounted) dispatch({ type: 'SET_DOWNLOADING', payload: true });
+      if (isMounted) dispatch({ type: 'SET_PROGRESS', payload: 0 });

-      const success = await downloadModel(GGUF_FILE, (p) => dispatch({ type: 'SET_PROGRESS', payload: p }));
-      dispatch({ type: 'SET_DOWNLOADING', payload: false });
+      const success = await downloadModel(GGUF_FILE, (p) => {
+        if (isMounted) dispatch({ type: 'SET_PROGRESS', payload: p });
+      });
+      if (!isMounted) return;
+      dispatch({ type: 'SET_DOWNLOADING', payload: false });

       if (success) {
         dispatch({ type: 'INIT_SUCCESS' });
       } else {
         dispatch({ type: 'INIT_FAILURE' });
       }
     } catch (error) {
-      Alert.alert("Error", "Failed to init model: " + error.message);
+      if (isMounted) Alert.alert("Error", "Failed to init model: " + error.message);
       console.error(error);
-      dispatch({ type: 'INIT_FAILURE' });
+      if (isMounted) dispatch({ type: 'INIT_FAILURE' });
     }
   };

   initModel();
+
+  return () => {
+    isMounted = false;
+  };
 }, []);
```
87-105: Consider adding data validation when loading chat history.

The `parsedChats.forEach` assumes the stored data has the expected structure. Corrupted AsyncStorage data could cause runtime errors.

🔎 Proposed fix

```diff
 if (storedChats) {
   const parsedChats = JSON.parse(storedChats);
+  if (!Array.isArray(parsedChats)) {
+    console.warn('Invalid chat history format, resetting...');
+    return;
+  }
   setConversation(parsedChats);
   // Hydrate conversation context for memory
-  parsedChats.forEach(msg => {
-    conversationContext.addMessage(msg.role, msg.content);
+  parsedChats.forEach((msg) => {
+    if (msg?.role && msg?.content) {
+      conversationContext.addMessage(msg.role, msg.content);
+    }
   });
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- Frontend/src/Components/ChatScreen/ChatHeader.jsx
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/Components/ChatScreen/EmptyState.jsx
- Frontend/src/Components/ChatScreen/MessageList.jsx
- Frontend/src/Components/ChatScreen/QuickReplies.jsx
- Frontend/src/Components/ChatScreen/TypingIndicator.jsx
- Frontend/src/Components/HeaderWithBack.jsx
- Frontend/src/Screens/ChatScreen.jsx
- Frontend/src/Screens/ProfileScreen.jsx
- Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (6)
Frontend/src/Components/ChatScreen/MessageList.jsx (2)
- Frontend/src/theme/ThemeContext.js (1): theme (63-63)
- Frontend/src/Screens/ChatScreen.jsx (1): flatListRef (62-62)

Frontend/src/Components/ChatScreen/ChatHeader.jsx (2)
- Frontend/src/Screens/ChatScreen.jsx (2): styles (487-509), navigation (50-50)
- Frontend/src/Components/HeaderWithBack.jsx (2): styles (22-42), navigation (7-7)

Frontend/src/Components/ChatScreen/ChatInput.jsx (5)
- Frontend/src/Screens/ChatScreen.jsx (4): styles (487-509), handlePaste (319-322), useRAGMode (60-60), userInput (59-59)
- Frontend/src/Components/ChatScreen/ChatHeader.jsx (1): styles (43-65)
- Frontend/src/Components/ChatScreen/EmptyState.jsx (1): styles (44-97)
- Frontend/src/Components/ChatScreen/MessageList.jsx (1): styles (53-100)
- Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1): styles (39-83)

Frontend/src/Components/ChatScreen/EmptyState.jsx (1)
- Frontend/src/Screens/ChatScreen.jsx (2): useRAGMode (60-60), handleQuickReply (424-426)

Frontend/src/Components/ChatScreen/QuickReplies.jsx (1)
- Frontend/src/Screens/ChatScreen.jsx (1): handleQuickReply (424-426)

Frontend/src/Components/HeaderWithBack.jsx (1)
- Frontend/src/Components/ChatScreen/ChatHeader.jsx (1): styles (43-65)
🔇 Additional comments (7)
Frontend/src/Screens/ProfileScreen.jsx (1)
69-70: LGTM! Clean refactor to reusable header component.

The adoption of `HeaderWithBack` simplifies the component structure and aligns with the broader UI refactor. The theme color updates are consistent with the changes in `ThemeContext.js`.

Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)
30-32: Note: All dots pulse in unison.

All three dots use the same `fadeAnim` opacity value, so they pulse together rather than in a staggered sequence. If you intended a "wave" effect where dots animate one after another, you would need separate Animated.Values with delayed start times.

This is likely intentional design, but flagging for awareness.
Frontend/src/Components/ChatScreen/MessageList.jsx (1)
44-48: LGTM with optional optimization note.

The FlatList setup is correct. Both `onContentSizeChange` and `onLayout` triggering `scrollToEnd` ensures reliable auto-scroll but may cause redundant scroll calls. Consider debouncing or using a single trigger if you notice scroll jitter.

Frontend/src/Components/ChatScreen/EmptyState.jsx (1)
5-41: LGTM!

Clean implementation with mode-aware task switching. The component correctly handles both RAG and Model modes with appropriate UI feedback.
Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)
5-41: LGTM!

Well-structured header component with clean prop handling and appropriate visual feedback for disabled states.
Frontend/src/Screens/ChatScreen.jsx (2)
25-47: LGTM!

Good use of `useReducer` for managing model initialization state. The reducer pattern cleanly handles the download/ready lifecycle.
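The download/ready lifecycle praised here can be sketched as a pure reducer. This is an illustrative sketch, not the PR's literal code; the action names mirror the dispatches quoted elsewhere in this review, and a pure function like this is trivially unit-testable outside React:

```javascript
// Hypothetical model-lifecycle reducer matching the dispatched action
// types quoted in this review (START_INIT, SET_DOWNLOADING, etc.).
const initialState = { isDownloading: false, progress: 0, isModelReady: false };

function modelReducer(state, action) {
  switch (action.type) {
    case 'START_INIT':
      // Reset everything at the start of a (re)initialization.
      return { ...initialState };
    case 'SET_DOWNLOADING':
      return { ...state, isDownloading: action.payload };
    case 'SET_PROGRESS':
      return { ...state, progress: action.payload };
    case 'INIT_SUCCESS':
      return { ...state, isDownloading: false, isModelReady: true };
    case 'INIT_FAILURE':
      return { ...state, isDownloading: false, isModelReady: false };
    default:
      return state;
  }
}
```

Because the reducer is pure, each transition (download start, progress update, success/failure) can be asserted directly without mounting a component.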
432-457: LGTM!

Clean component composition with conditional rendering. The modular structure significantly improves readability over the previous monolithic approach.
Actionable comments posted: 0
🧹 Nitpick comments (3)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (3)
8-17: Consider adding fadeAnim to the dependency array.

Although `fadeAnim` is stable and won't cause issues, including it in the dependency array satisfies the React Hooks exhaustive-deps rule and improves code clarity.

🔎 Suggested change

```diff
   useEffect(() => {
     const animation = Animated.loop(
       Animated.sequence([
         Animated.timing(fadeAnim, { toValue: 1, duration: 500, useNativeDriver: true }),
         Animated.timing(fadeAnim, { toValue: 0, duration: 500, useNativeDriver: true }),
       ])
     );
     animation.start();
     return () => animation.stop();
-  }, []);
+  }, [fadeAnim]);
```
19-36: Add accessibility support for screen readers.

The component currently lacks accessibility labels. Screen reader users should be informed when the bot is processing a response.

🔎 Suggested enhancement

```diff
 return (
-  <View style={styles.container}>
+  <View
+    style={styles.container}
+    accessibilityRole="status"
+    accessibilityLabel="BabyNest is thinking"
+    accessibilityLiveRegion="polite"
+  >
     {/* Bot Avatar - Matching MessageList */}
     <View style={styles.botAvatar}>
       <Icon name="smart-toy" size={20} color="rgb(218,79,122)" />
     </View>
```
30-32: Consider staggered dot animation for a more conventional typing indicator.

All three dots currently pulse in perfect sync. Most typing indicators use a staggered "wave" effect where each dot starts its animation slightly after the previous one, creating a more dynamic visual cue.

🔎 Example implementation with staggered timing

```diff
 export default function TypingIndicator() {
-  const fadeAnim = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim1 = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim2 = React.useRef(new Animated.Value(0)).current;
+  const fadeAnim3 = React.useRef(new Animated.Value(0)).current;

   useEffect(() => {
+    const createDotAnimation = (anim, delay) =>
+      Animated.sequence([
+        Animated.delay(delay),
+        Animated.timing(anim, { toValue: 1, duration: 400, useNativeDriver: true }),
+        Animated.timing(anim, { toValue: 0, duration: 400, useNativeDriver: true }),
+      ]);
+
-    const animation = Animated.loop(
-      Animated.sequence([
-        Animated.timing(fadeAnim, { toValue: 1, duration: 500, useNativeDriver: true }),
-        Animated.timing(fadeAnim, { toValue: 0, duration: 500, useNativeDriver: true }),
-      ])
-    );
-    animation.start();
-    return () => animation.stop();
+    const animations = [
+      Animated.loop(createDotAnimation(fadeAnim1, 0)),
+      Animated.loop(createDotAnimation(fadeAnim2, 200)),
+      Animated.loop(createDotAnimation(fadeAnim3, 400)),
+    ];
+
+    animations.forEach(anim => anim.start());
+    return () => animations.forEach(anim => anim.stop());
-  }, []);
+  }, [fadeAnim1, fadeAnim2, fadeAnim3]);

 // In the render:
-  <Animated.View style={[styles.dot, { opacity: fadeAnim }]} />
-  <Animated.View style={[styles.dot, { opacity: fadeAnim }]} />
-  <Animated.View style={[styles.dot, { opacity: fadeAnim }]} />
+  <Animated.View style={[styles.dot, { opacity: fadeAnim1 }]} />
+  <Animated.View style={[styles.dot, { opacity: fadeAnim2 }]} />
+  <Animated.View style={[styles.dot, { opacity: fadeAnim3 }]} />
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (2)
- Frontend/src/Components/ChatScreen/MessageList.jsx (1): styles (53-100)
- Frontend/src/Screens/ChatScreen.jsx (1): styles (487-509)
🔇 Additional comments (1)
Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)
6-6: Previous critical issue resolved!

The Animated.Value is now correctly wrapped in `useRef`, ensuring it persists across renders and preventing the memory leak flagged in the earlier review.
Actionable comments posted: 1
🧹 Nitpick comments (2)
Frontend/src/Screens/ChatScreen.jsx (2)
235-236: Remove commented-out console.error.

Dead code should be removed to keep the codebase clean.

🔎 Proposed fix

```diff
-// Additional debugging for undefined errors
-if (!result) {
-  // console.error('❌ RESULT IS NULL/UNDEFINED!');
-}
```
357-365: Consider adding explicit breaks or comments for case fallthrough.

Lines 357-365 use intentional case fallthrough where multiple cases share the same reply options. While this works correctly in JavaScript, adding a comment like `// Intentional fallthrough` or restructuring with explicit breaks would improve clarity.

🔎 Suggested improvements

Option 1: Add comments

```diff
 case 'time':
 case 'appointment_time':
+  // Both time-related fields use the same suggestions
   replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
   break;

 case 'name':
 case 'medicine_name':
+  // Both name-related fields use the same medicine suggestions
   replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
   break;
```

Option 2: Group cases more explicitly

```diff
-case 'time':
-case 'appointment_time':
-  replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
-  break;
-
-case 'name':
-case 'medicine_name':
-  replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
-  break;
+case 'time':
+case 'appointment_time': {
+  replies.push('Morning', 'Afternoon', 'Evening', '9:00 AM', '2:00 PM');
+  break;
+}
+
+case 'name':
+case 'medicine_name': {
+  replies.push('Paracetamol', 'Iron', 'Folic Acid', 'Calcium');
+  break;
+}
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- Frontend/src/Components/ChatScreen/ChatHeader.jsx
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/Screens/ChatScreen.jsx
🚧 Files skipped from review as they are similar to previous changes (1)
- Frontend/src/Components/ChatScreen/ChatHeader.jsx
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/Components/ChatScreen/ChatInput.jsx (1)
- Frontend/src/Screens/ChatScreen.jsx (5): handlePaste (335-338), useRAGMode (60-60), userInput (59-59), handleSendMessage (166-329), isGenerating (57-57)
🔇 Additional comments (20)
Frontend/src/Components/ChatScreen/ChatInput.jsx (4)
5-13: LGTM! Well-structured component props.

The props are clearly defined and cover all necessary inputs for the chat input functionality. The separation of concerns between state management, event handlers, and mode flags is clean.

15-18: LGTM! Platform-specific keyboard handling.

The KeyboardAvoidingView configuration correctly handles the platform differences between iOS and Android, with appropriate offsets for the iOS keyboard.
26-37: LGTM! Consistent conditional logic across input states.

The editable, placeholder, and opacity logic correctly reflect the two modes:

- RAG mode: always editable (doesn't require local model)
- Model mode: editable only when `isModelReady`

This provides clear visual feedback to users about input availability.

39-48: LGTM! Conditional send button with proper disabled state.

The send button appears only when there's user input, and the disabled logic correctly prevents sending when generating or when the model isn't ready (in model mode). This matches the editable logic on the TextInput for consistency.
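The two-mode gating described in these comments reduces to a pair of small predicates. This is a sketch using the prop names quoted in the review, not the component's literal code:

```javascript
// Input is editable in RAG mode unconditionally; in model mode, only
// once local model initialization has completed.
function isInputEditable(useRAGMode, isModelReady) {
  return useRAGMode || isModelReady;
}

// The send button additionally blocks while a response is generating.
function canSend(useRAGMode, isModelReady, isGenerating) {
  return !isGenerating && isInputEditable(useRAGMode, isModelReady);
}
```

Centralizing the condition like this keeps the TextInput `editable`, placeholder, opacity, and button `disabled` props from drifting apart.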
Frontend/src/Screens/ChatScreen.jsx (16)
1-1: LGTM! Clean modularization with AsyncStorage persistence.

The addition of AsyncStorage and the new component imports reflect the PR's goal of splitting ChatScreen into reusable modules. This improves maintainability and testability.

Also applies to: 17-23

26-47: LGTM! Well-structured reducer for model lifecycle.

The reducer pattern is appropriate for managing the complex model initialization state (downloading, progress, ready). The actions cover all necessary state transitions, and the initial state is properly defined.

67-85: LGTM! Comprehensive conversation clearing.

The clearConversation function properly clears all three data stores (state, AsyncStorage, conversationContext) and includes a user confirmation dialog. This ensures data consistency across the application.

87-111: LGTM! Robust chat history persistence with context hydration.

The loadChats effect properly validates the stored data format and hydrates the conversationContext on mount. The defensive checks for Array validation and message properties prevent crashes from corrupted storage data.

114-120: LGTM! Clean persistence helper.

The saveChats helper provides a centralized way to persist conversation history, with appropriate error logging. This is used consistently throughout the component after user messages and bot responses.
122-162: LGTM! Safe model initialization with cleanup.

The model initialization effect properly uses an `isMounted` flag to prevent state updates after component unmount, and dispatches appropriate reducer actions throughout the initialization lifecycle. The progress callback integration is well-structured.

166-174: LGTM! Smart content parameter handling.

The function signature accepting an optional `content` parameter (for programmatic sends like quick replies) is well-designed. The type check on lines 167-168 elegantly handles both string content and event objects from UI presses, preventing bugs from accidental event object usage.
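The string-vs-event guard described here can be sketched as follows (the helper name is hypothetical; the actual check lives inline at lines 167-168):

```javascript
// Quick replies pass a string; an onPress handler wired directly to the
// button may accidentally pass the press event object instead. Only a
// real string is treated as programmatic content.
function resolveTextToSend(content, userInput) {
  return typeof content === 'string' ? content : userInput;
}
```

This keeps a stray `onPress={handleSendMessage}` (which receives an event object) from ever serializing `[object Object]` into the conversation.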
185-197: LGTM! Proper state and persistence flow.

The message handling correctly:

- Creates the user message with a unique ID
- Updates conversation state
- Persists to AsyncStorage immediately
- Clears the input
- Updates the conversationContext for memory

This ensures data consistency across all three storage locations.
212-230: LGTM! Clear mode branching logic.

The conditional logic cleanly separates RAG mode (with follow-up handling) from Model mode (backend model), with appropriate logging. The Model mode properly constructs a result object matching the expected structure.
239-283: LGTM! Comprehensive result handling with multiple action types.

The result handling properly manages follow-up context, navigation commands, logout flows, emergency actions, and context refresh. The safety checks for required fields (e.g., line 243) prevent errors from malformed results.

311-322: LGTM! Complete bot response handling.

The bot message is properly created, added to conversation state, persisted to AsyncStorage, added to conversationContext, and triggers auto-scroll. The async operations are well-sequenced.

335-338: LGTM! Paste handler correctly updates input.

The handlePaste function retrieves clipboard text and calls handleInputChange to update the input field. This allows users to paste text without immediately sending it, giving them a chance to review or edit before sending.

346-436: LGTM! Comprehensive quick reply generation.

The getQuickReplies function provides contextual suggestions based on missing fields from pending follow-ups. The switch statement covers a wide range of fields with appropriate suggestions. The deduplication via Set (line 435) and limiting to 4 items provides a clean UX.
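The dedupe-and-cap step mentioned above is a one-liner worth isolating (helper name is hypothetical):

```javascript
// Deduplicate suggestions while preserving insertion order, then cap
// the list at four entries for a compact quick-reply row.
function finalizeQuickReplies(replies, limit = 4) {
  return [...new Set(replies)].slice(0, limit);
}
```

Spreading a `Set` preserves first-insertion order, so earlier (higher-priority) suggestions survive the cap.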
447-471: LGTM! Clean component composition.

The main render elegantly composes the new modular components:

- ChatHeader for navigation and controls
- Conditional EmptyState or MessageList based on conversation state
- TypingIndicator integrated as MessageList footer for smooth scrolling

This achieves the PR's goal of modularization while maintaining a clean component hierarchy.
488-496: LGTM! ChatInput properly wired with all necessary props.

The ChatInput component receives all required props for state management, event handling, and mode detection. The use of `handleInputChange` as a wrapper (line 490) maintains flexibility for future input transformation logic.

284-309: The fallback is reachable when `processFollowUpResponse()` returns null.

The else block executes when `result && typeof result === 'object'` is false. This occurs when `conversationContext.processFollowUpResponse()` returns `null` (line 56 in ConversationContext.js) if there's no pending follow-up. While RAG query paths and Model mode always return objects, the follow-up handler's null return creates this fallback scenario.

Consider clarifying the condition or adding a comment explaining when this null return is expected, since the current logic may be confusing to maintainers.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
Frontend/src/model/model.jsx (1)
115-115: Inconsistent return value when memory check fails.

When `checkMemoryBeforeLoading` returns `false`, this line returns `undefined` instead of `false`. This inconsistency can cause issues for callers expecting a boolean.

🔎 Proposed fix

```diff
-if (!(await checkMemoryBeforeLoading(destPath))) return;
+if (!(await checkMemoryBeforeLoading(destPath))) return false;
```

Frontend/src/theme/ThemeContext.js (1)
36-36: Pre-existing typo: missing `#` prefix in dark theme text color.

The `text` property is missing the `#` prefix, which will result in an invalid color value.

🔎 Proposed fix

```diff
-  text: 'fff3b0',
+  text: '#fff3b0',
```
♻️ Duplicate comments (1)
Frontend/src/Screens/ChatScreen.jsx (1)
284-296: Scroll-to-bottom feature is now properly wired.

The `onScrollPositionChange` prop is correctly passed to `MessageList`, addressing the incomplete feature flagged in the previous review. The scroll position detection in MessageList.jsx (lines 49-59) will now properly show/hide the scroll-to-bottom button.
🧹 Nitpick comments (7)
Frontend/src/Components/ChatScreen/MessageList.jsx (4)
7-13: Unused `theme` dependency in `useMemo`.

The `markdownStyles` object uses hardcoded color values (`#000000`, `rgb(218,79,122)`) rather than values from the `theme` prop. Either remove `theme` from the dependency array or use theme colors for consistency.

🔎 Suggested fix

```diff
-const markdownStyles = useMemo(() => ({
-  body: { color: "#000000", fontSize: 16, lineHeight: 24 },
-  heading1: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
-  heading2: { color: "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
-  strong: { fontWeight: "bold", color: "rgb(218,79,122)" },
-  list_item: { marginVertical: 5 },
-}), [theme]);
+const markdownStyles = useMemo(() => ({
+  body: { color: theme?.text || "#000000", fontSize: 16, lineHeight: 24 },
+  heading1: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 10 },
+  heading2: { color: theme?.primary || "rgb(218,79,122)", fontWeight: "bold", marginVertical: 8 },
+  strong: { fontWeight: "bold", color: theme?.primary || "rgb(218,79,122)" },
+  list_item: { marginVertical: 5 },
+}), [theme]);
```
20-43: Consider memoizing `renderItem` to reduce re-renders.

The inline `renderItem` function is recreated on every render. For chat lists that may grow large, extracting and memoizing this with `useCallback` can improve scroll performance.
44-44: Verify `item.id` is always defined to prevent key warnings.

If any message lacks an `id`, React will produce key warnings or undefined behavior. Consider a fallback:

```diff
-keyExtractor={(item) => item.id}
+keyExtractor={(item, index) => item.id ?? `msg-${index}`}
```
46-47: Redundant scroll-to-end triggers.

Both `onContentSizeChange` and `onLayout` call `scrollToEnd`. Typically, `onContentSizeChange` alone suffices for auto-scrolling when new messages arrive. Triggering on both may cause double-scrolling or slight jank on initial layout.

Frontend/src/hooks/useChatEngine.js (1)
136-136: Potential ID collision between user and bot messages.

Using `Date.now() + 1` for the bot message ID could collide if processing is very fast. Consider using a UUID or a more robust ID generation strategy.
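One robust alternative along the lines the comment suggests, sketched without a UUID dependency (the name and scheme are illustrative, not from the PR):

```javascript
// Monotonic ID generator: a timestamp plus an in-session counter stays
// unique even when two messages are created in the same millisecond,
// which is exactly the case where Date.now() + 1 can collide.
let seq = 0;
function nextMessageId() {
  seq += 1;
  return `${Date.now()}-${seq}`;
}
```

If IDs must also be unique across app restarts within the same millisecond (unlikely but possible), a proper UUID library is the safer choice.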
1-1: Unused import: `AsyncStorage`.

`AsyncStorage` is imported but not used in this file. Chat persistence is handled in useChatEngine.js.

🔎 Proposed fix

```diff
-import AsyncStorage from '@react-native-async-storage/async-storage';
```
108-114: Consider adding `initModel` to useEffect dependencies or suppressing the lint warning.

The effect calls `initModel` but has an empty dependency array. While the guard on line 85 prevents re-entry, ESLint will likely warn about missing dependencies. Either add the dependency or use an eslint-disable comment with justification.

```diff
 useEffect(() => {
   initModel();

-  // Cleanup on unmount
   return () => {
     unloadModel();
   };
+  // eslint-disable-next-line react-hooks/exhaustive-deps
+  // intentionally run only on mount; initModel has its own re-entry guard
 }, []);
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/Components/ChatScreen/MessageList.jsx
- Frontend/src/Screens/ChatScreen.jsx
- Frontend/src/hooks/useChatEngine.js
- Frontend/src/model/model.jsx
- Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (4)
Frontend/src/hooks/useChatEngine.js (1)
Frontend/src/model/model.jsx (6)
context(8-8)response(42-42)response(201-201)result(193-197)generateResponse(143-208)generateResponse(143-208)
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
Frontend/src/Screens/ChatScreen.jsx (5)
`handlePaste` (160-163), `useRAGMode` (60-60), `modelError` (56-56), `userInput` (59-59), `handleSendMessage` (118-154)

Frontend/src/hooks/useChatEngine.js (1)

`isGenerating` (11-11)
Frontend/src/Screens/ChatScreen.jsx (6)
Frontend/src/hooks/useChatEngine.js (6)
`useChatEngine` (9-159), `useChatEngine` (9-159), `clearConversation` (46-50), `sendMessage` (52-151), `conversation` (10-10), `isGenerating` (11-11)

Frontend/src/model/model.jsx (5)

`context` (8-8), `downloadModel` (70-100), `downloadModel` (70-100), `unloadModel` (130-141), `unloadModel` (130-141)

Frontend/src/Components/ChatScreen/ChatInput.jsx (1)

`ChatInput` (5-64)

Frontend/src/Components/ChatScreen/MessageList.jsx (1)

`MessageList` (6-63)

Frontend/src/Components/ChatScreen/ChatHeader.jsx (1)

`ChatHeader` (5-41)

Frontend/src/Components/ChatScreen/TypingIndicator.jsx (1)

`TypingIndicator` (5-37)
Frontend/src/model/model.jsx (1)
Frontend/src/context/AgentContext.jsx (3)
`error` (29-29), `context` (19-19), `context` (27-27)
🔇 Additional comments (9)
Frontend/src/model/model.jsx (2)
130-141: Good addition of `unloadModel` for resource cleanup. The new `unloadModel` function properly releases the llama context and handles errors gracefully. This aligns with the cleanup pattern in `ChatScreen.jsx`'s unmount effect.
191-199: Timing instrumentation is helpful for performance monitoring. The added logging for inference duration provides useful debugging and performance tracking information.
Frontend/src/theme/ThemeContext.js (1)
8-18: Theme color updates and default theme switch look good. The color palette updates align with the redesigned UI components, and switching the initial theme to `default` is consistent with the PR objectives.

Also applies to: 63-63
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
23-25: Accessibility labels are well implemented. Good use of `accessibilityLabel` on interactive elements for screen reader support.

Also applies to: 55-56
27-46: Error retry flow is well structured. The conditional rendering for model error state with a retry action provides good UX for recovery from failed model loading.
Frontend/src/hooks/useChatEngine.js (2)
46-50: `clearConversation` is properly memoized. Good use of `useCallback` for the clear function that's passed as a prop.
77-78: The concern is already addressed in the code. The `ragService.initialize()` method includes a guard clause (`if (this.isInitialized) return;`) that ensures it returns immediately on subsequent calls. While calling it on every message send might appear wasteful, it has negligible overhead due to the early return, and no side effects occur after the first initialization.

Likely an incorrect or invalid review comment.
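For readers unfamiliar with the pattern, a minimal sketch of such an idempotent guard (class and counter names here are illustrative, not the actual RAGService implementation):

```javascript
// Idempotent initialize: an isInitialized flag makes repeat calls no-ops.
class RagServiceSketch {
  constructor() {
    this.isInitialized = false;
    this.initCount = 0; // stands in for the expensive one-time setup
  }

  async initialize() {
    if (this.isInitialized) return; // early return on subsequent calls
    this.initCount += 1;            // e.g. loading intent embeddings
    this.isInitialized = true;
  }
}
```

Because the flag is set synchronously before any await, even back-to-back calls from the same tick only run the setup once.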
Frontend/src/Screens/ChatScreen.jsx (2)
25-48: Clean reducer pattern for model state management. The `modelReducer` with typed actions (`START_INIT`, `SET_DOWNLOADING`, `SET_PROGRESS`, `INIT_SUCCESS`, `INIT_FAILURE`) provides clear state transitions and is easier to debug than multiple `useState` calls. The `initModel` callback properly handles the download flow and error cases.

Also applies to: 55-56, 83-106
117-154: Well-structured separation of concerns for message handling. The `handleSendMessage` function correctly delegates core logic to the `useChatEngine` hook while keeping UI-specific side effects (navigation, logout, emergency routing) in the screen component. This maintains a clean separation between business logic and presentation.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Frontend/src/model/model.jsx (1)
200-207: Missing cache write for successful responses. The `generateResponse` function reads from cache via `getCache(cacheKey)` but never calls `setCache` to store new responses. This means the caching mechanism is ineffective: responses are never cached.

🔎 Proposed fix to cache successful responses

```diff
 const response = result?.text?.trim();
+
+if (response && cacheKey) {
+  setCache(cacheKey, response);
+}
+
 return response;
```
🧹 Nitpick comments (4)
Frontend/src/model/model.jsx (1)
191-199: Consider removing or gating timing logs in production. The inference timing instrumentation is useful for development but may add noise in production logs. Consider gating these logs behind a `__DEV__` check or a debug flag.

```diff
-console.log("Starting inference...");
-const startTime = Date.now();
+if (__DEV__) console.log("Starting inference...");
+const startTime = __DEV__ ? Date.now() : 0;
 const result = await context.completion({
   messages: messagesToSend,
   n_predict: 500,
   stop: stopWords
 });
-const endTime = Date.now();
-console.log(`⚡ Inference Time: ${endTime - startTime}ms`);
+if (__DEV__) {
+  const endTime = Date.now();
+  console.log(`⚡ Inference Time: ${endTime - startTime}ms`);
+}
```

Frontend/src/hooks/useChatEngine.js (3)
70-75: Stale closure issue mitigated but `currentConversation` timing is fragile. The functional update pattern is used, but `currentConversation` is assigned inside the state updater and used outside. While React guarantees synchronous execution of the updater function, relying on this side-effect assignment is fragile and may confuse future maintainers. A cleaner approach would be to use a ref to track the latest conversation:

```js
const conversationRef = useRef(conversation);

useEffect(() => {
  conversationRef.current = conversation;
}, [conversation]);

// Then use conversationRef.current when needed
```

This is minor since the current code works, but worth noting for maintainability.
9-11: Unused parameters `context` and `refreshContext` should be documented or removed. The hook accepts `refreshContext` as a parameter but never uses it. The `context` parameter is only passed to `conversationContext.setUserContext()`. Consider documenting their intended purpose or removing unused parameters.
81-82: `initialize()` is idempotent and already optimized. While `ragService.initialize()` is called on every `sendMessage`, the implementation includes an `if (this.isInitialized) return;` guard (line 20 of RAGService.js) that makes subsequent calls instant no-ops. The expensive operation (`loadIntentEmbeddings()`) runs only once. However, the explicit call here is functionally redundant since `processQuery()` already includes the same initialization fallback (RAGService.js, line 635). Consider removing the redundant call on line 81 or deferring initialization to the first RAG operation to reduce unnecessary async invocations.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/hooks/useChatEngine.js
- Frontend/src/model/model.jsx
- Frontend/src/theme/ThemeContext.js
🚧 Files skipped from review as they are similar to previous changes (2)
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/theme/ThemeContext.js
🧰 Additional context used
🧬 Code graph analysis (1)
Frontend/src/hooks/useChatEngine.js (1)
Frontend/src/model/model.jsx (8)
`context` (8-8), `response` (42-42), `response` (201-201), `result` (193-197), `startTime` (192-192), `generateResponse` (143-208), `generateResponse` (143-208), `endTime` (198-198)
🔇 Additional comments (4)
Frontend/src/model/model.jsx (2)
70-100: Approve changes with a note on potential race condition. The refactored `downloadModel` now correctly attempts to load the model after download and returns `null` on failure, providing better error feedback. However, there's a potential race condition: if `downloadModel` is called twice concurrently with the same fileName, both calls could attempt to load the model simultaneously since there's no mutex/lock. For a production app, consider adding a download-in-progress guard (e.g., a Map tracking active downloads) to prevent concurrent downloads of the same file.
130-141: LGTM - Clean model cleanup implementation. The `unloadModel` function properly releases the llama context, resets the module-level `context` variable, and handles errors gracefully. This integrates well with the chat screen unmount lifecycle.

Frontend/src/hooks/useChatEngine.js (2)
116-141: Good: Backend fallback with timeout properly implemented. The `AbortController` with 15s timeout is correctly implemented, addressing the previous review concern. The timeout is cleared in both success and error paths. The fallback to local model on backend failure provides resilience.
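The timeout wiring described here can be sketched as a small helper (the 15 s value comes from the review; the helper name and the `/agent` usage line are assumptions):

```javascript
// Pairs an AbortSignal with a timer: abort fires after `ms` unless
// cancel() is called when the request settles.
function timeoutSignal(ms) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return {
    signal: controller.signal,
    cancel: () => clearTimeout(timer), // call in both success and error paths
  };
}

// Usage sketch (endpoint hypothetical):
// const { signal, cancel } = timeoutSignal(15000);
// try { await fetch('/agent', { method: 'POST', signal }); } finally { cancel(); }
```

Calling `cancel()` in a `finally` block matches the review's observation that the timeout is cleared on both paths.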
52-62: Silent failure when context initialization fails. When `initializeContext()` fails, only a warning is logged but execution continues. This may lead to confusing behavior if the context is required for subsequent operations. Consider either:
- Returning early with an error message to the user
- Or documenting why it's acceptable to continue without context
Actionable comments posted: 1
🧹 Nitpick comments (3)
Frontend/src/model/model.jsx (2)
144-217: Enhanced robustness and performance. The changes add input validation, timing instrumentation, and response caching. The reduction of `n_predict` from 500 to 256 will improve response latency but may truncate longer responses. Ensure this tradeoff aligns with product requirements. Note: Monitor if users encounter truncated responses in practice. You may want to make `n_predict` configurable based on query complexity or user preferences.
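One way to make that configurable, as a rough sketch (the word-count heuristic and thresholds are assumptions, not taken from the codebase):

```javascript
// Pick n_predict from a crude query-complexity signal instead of a
// hardcoded 256. Thresholds here are illustrative only.
function pickNPredict(query) {
  const words = query.trim().split(/\s+/).filter(Boolean).length;
  if (words <= 8) return 128;  // short question: keep latency low
  if (words <= 30) return 256; // the current default
  return 500;                  // long prompt: allow a fuller answer
}
```

The result would be passed as the `n_predict` option to `context.completion`, keeping short exchanges fast while letting detailed questions get longer answers.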
117-122: Consider platform-specific configuration for GPU and thread settings. The model initialization hardcodes `n_gpu_layers: 0` and `n_threads: 2` for emulator stability. Production devices should benefit from GPU acceleration and additional threads. Implement device detection (DeviceInfo is already imported) to apply higher-performance settings on real devices, or move these values to environment-based configuration.

Frontend/src/Screens/ChatScreen.jsx (1)
174-264: Comprehensive quick reply logic. The function covers a wide range of field types and provides helpful suggestions. The deduplication and limiting to 4 replies prevents UI clutter.
Consider extracting this to a separate utility file (e.g., `utils/quickReplySuggestions.js`) to keep the screen component focused on rendering and event handling. This would also make it easier to test and extend.
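If extracted, the core dedupe-and-cap behavior could look like this (function name and module path are hypothetical):

```javascript
// utils/quickReplySuggestions.js (hypothetical): dedupe suggestion chips
// and cap the list at four, mirroring the behavior described above.
function limitQuickReplies(replies, max = 4) {
  return [...new Set(replies)].slice(0, max);
}
```

A pure function like this is trivially unit-testable, which is the main benefit of pulling it out of the screen component.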
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- Frontend/src/Components/ChatScreen/ChatInput.jsx
- Frontend/src/Components/ChatScreen/MessageList.jsx
- Frontend/src/Screens/ChatScreen.jsx
- Frontend/src/hooks/useChatEngine.js
- Frontend/src/model/model.jsx
🧰 Additional context used
🧬 Code graph analysis (4)
Frontend/src/hooks/useChatEngine.js (4)
Frontend/src/context/AgentContext.jsx (3)
`isInitialized` (31-31), `refreshContext` (68-94), `error` (29-29)

Frontend/src/model/model.jsx (8)

`context` (8-8), `response` (42-42), `response` (204-204), `result` (194-198), `startTime` (193-193), `generateResponse` (144-217), `generateResponse` (144-217), `endTime` (200-200)

Frontend/src/theme/ThemeContext.js (1)

`context` (79-79)

Frontend/src/Screens/ChatScreen.jsx (2)

`clearConversation` (72-84), `useRAGMode` (60-60)
Frontend/src/Components/ChatScreen/ChatInput.jsx (2)
Frontend/src/Screens/ChatScreen.jsx (5)
`handlePaste` (163-166), `useRAGMode` (60-60), `modelError` (56-56), `userInput` (59-59), `handleSendMessage` (121-157)

Frontend/src/hooks/useChatEngine.js (1)

`isGenerating` (11-11)
Frontend/src/Screens/ChatScreen.jsx (3)
Frontend/src/hooks/useChatEngine.js (6)
`useChatEngine` (9-201), `useChatEngine` (9-201), `clearConversation` (49-54), `sendMessage` (56-193), `conversation` (10-10), `isGenerating` (11-11)

Frontend/src/model/model.jsx (5)

`context` (8-8), `downloadModel` (70-100), `downloadModel` (70-100), `unloadModel` (131-142), `unloadModel` (131-142)

Frontend/src/services/ConversationContext.js (2)

`conversationContext` (384-384), `conversationContext` (384-384)
Frontend/src/Components/ChatScreen/MessageList.jsx (3)
Frontend/src/theme/ThemeContext.js (1)
`theme` (63-63)

Frontend/src/hooks/useChatEngine.js (1)

`conversation` (10-10)

Frontend/src/Screens/ChatScreen.jsx (1)

`flatListRef` (62-62)
🔇 Additional comments (18)
Frontend/src/Components/ChatScreen/MessageList.jsx (3)
15-23: LGTM! The auto-scroll implementation correctly tracks conversation length changes and only scrolls when new messages are added.
30-65: Well-structured message rendering. The differentiation between user and bot messages is clear, with appropriate styling and the copy-on-long-press feature. Good use of optional chaining for the `onCopyMessage` callback.
69-78: LGTM! The scroll position tracking correctly detects when the user has scrolled away from the bottom, enabling the scroll-to-bottom button feature. The throttle value of 16ms is appropriate for smooth 60fps scrolling.
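The near-bottom check behind that button can be expressed as a pure function of React Native's scroll event geometry (the 40 px threshold is an assumption, not the component's actual value):

```javascript
// True when the viewport bottom is within `threshold` px of the content
// end; used to hide the scroll-to-bottom button when already at the bottom.
function isNearBottom({ contentOffset, contentSize, layoutMeasurement }, threshold = 40) {
  const distanceFromBottom =
    contentSize.height - (contentOffset.y + layoutMeasurement.height);
  return distanceFromBottom <= threshold;
}
```

The argument shape matches the `nativeEvent` delivered by `onScroll`, so the component can call it directly inside its throttled handler.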
Frontend/src/model/model.jsx (2)
74-99: Improved error handling in download flow. The changes ensure that `loadModel` is called after download and properly validates the result before returning. This prevents returning a path to a model that failed to load. The consistent `null` returns on failure improve error handling predictability.
131-142: LGTM! The new `unloadModel` function properly handles cleanup with defensive checks and error logging. This supports the new model lifecycle management introduced in the PR.

Frontend/src/Components/ChatScreen/ChatInput.jsx (3)
5-15: Well-designed component interface. The props provide clear separation between input state, generation state, and error handling, making the component easy to test and maintain.
31-53: LGTM! The conditional rendering properly handles model error states and provides clear user feedback. The `editable` prop correctly restricts input when the model isn't ready in local mode.
55-67: LGTM! The send button correctly uses optional chaining (addressing the previous review comment) and has appropriate disabled states to prevent issues during generation or when the model isn't ready.
Frontend/src/hooks/useChatEngine.js (5)
15-36: LGTM! The chat history loading properly validates data before setting state and hydrating the conversation context. The empty dependency array ensures it only runs on mount.
39-54: LGTM! The `saveChats` function is now properly memoized (addressing the previous review comment), and `clearConversation` includes a cancellation mechanism to prevent stale responses from appearing after clearing.
56-91: LGTM! The message ID generation now uses a robust combination of timestamp and random string (addressing the previous collision concern), and the functional state update ensures the latest conversation state is used (fixing the stale closure issue).
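A minimal sketch of the timestamp-plus-random scheme described here (the exact helper in the hook may differ):

```javascript
// Combines a millisecond timestamp with a random base-36 suffix so two
// messages created in the same tick still receive distinct ids.
function createMessageId(prefix = 'msg') {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
}
```

Unlike `Date.now() + 1`, this stays unique even when the user message and bot placeholder are created within the same millisecond.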
138-161: LGTM! The backend fallback now includes proper timeout handling with `AbortController` (addressing the previous review comment), preventing indefinite hangs on network issues.
164-193: Cancellation guard prevents stale responses. The cancellation mechanism properly prevents responses from appearing after the conversation is cleared. The inclusion of `conversation` in the `useCallback` dependencies (Line 193) ensures the function always has the current state, though this means the function is recreated on every message. This is a reasonable tradeoff for correctness over micro-optimization.

Frontend/src/Screens/ChatScreen.jsx (5)
25-48: LGTM! The model state reducer provides clean, predictable state management for the model lifecycle. All necessary state transitions are covered.
55-69: Clean hook integration. The chat engine hook is properly integrated, and wrapping `baseClearConversation` with a confirmation dialog provides good UX.
86-117: LGTM! The model initialization correctly prevents duplicate downloads and properly cleans up on unmount with `unloadModel`. The empty dependency array for the effect is appropriate since initialization should only happen once.
121-157: LGTM! The message sending flow correctly delegates core logic to the hook and handles UI-specific side effects (navigation, logout, emergency) in the screen component. Good separation of concerns.
274-330: LGTM! The component composition is clean and well-organized. The scroll-to-bottom button is now properly wired with the `onScrollPositionChange` callback (addressing the previous review comment), and all modular components are correctly integrated.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@bhavik-mangla
Closes #
📝 Description
This PR refactors the [ChatScreen] component to improve code quality, state management, and user experience.
🔧 Changes Made
Component Modularization
We split the large [ChatScreen.jsx] file into specific functional components located in src/Components/ChatScreen/:

- ChatHeader: Handles the top bar, navigation, and model mode switching.
- MessageList: Manages the rendering of the chat history and auto-scroll logic.
- ChatInput: Encapsulates the text input area, send button, and attachment handling.
- EmptyState: Displays the initial "How can I help you?" view with suggestions.
- QuickReplies: Manages the floating suggestion chips for follow-up questions.
- TypingIndicator: A dedicated component for the "BabyNest is thinking..." animation.
- useChatEngine: A hook that manages the agent and model.

Logic & State Refactor ([ChatScreen.jsx])

- Replaced scattered model state with useReducer for cleaner logic.
- Moved auto-scroll handling into MessageList, ensuring it scrolls naturally with messages.

📷 Screenshots or Visual Changes (if applicable)
Video : https://drive.google.com/drive/folders/1shZI6mKrruf4Kg-hpjFUZVhpAPHYVoN3?usp=sharing
🤝 Collaboration
Collaborated with: na
✅ Checklist
Summary by CodeRabbit
New Features
Refactor
Style