Update bedrock_guardrails.py #16022
base: main
Conversation
prioritized over the configuration parameters.
@kothamah is attempting to deploy a commit to the CLERKIEAI Team on Vercel. A member of the Team first needs to authorize it.
```python
# If the content is not blocked then will pass the masked content from guardrail into LLM call

# if user opted into masking, return False. since we'll use the masked output from the guardrail
if self.mask_request_content or self.mask_response_content:
```
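The ordering under review — a blocked result must take priority, and only a non-blocked result may fall back to the guardrail's masked content — can be sketched as below. This is a minimal illustration, not LiteLLM's actual hook implementation; `GuardrailResult` and `text_to_forward` are hypothetical names introduced here.

```python
# Illustrative sketch: blocking is checked before masking, so blocked
# content is never forwarded to the LLM even when masking is enabled.
# GuardrailResult and text_to_forward are hypothetical, not LiteLLM API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GuardrailResult:
    action: str                  # e.g. "GUARDRAIL_INTERVENED" or "NONE"
    blocked: bool                # True if any filter action was "BLOCKED"
    masked_text: Optional[str]   # text with PII replaced, if masking ran


def text_to_forward(result: GuardrailResult, original_text: str,
                    mask_request_content: bool) -> str:
    if result.blocked:
        # Blocking is prioritized over the masking configuration:
        # reject the request instead of forwarding anything.
        raise PermissionError("Violated guardrail policy")
    if mask_request_content and result.masked_text is not None:
        # Not blocked, and the user opted into masking:
        # forward the masked output from the guardrail.
        return result.masked_text
    return original_text
```

With this ordering, enabling `mask_request_content` cannot accidentally let a blocked prompt through, which is the behavior the PR is after.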
please add testing for this @kothamah
@krrishdholakia Thanks Krrish for reviewing this. I have added test cases for the bedrock guardrail changes.
@krrishdholakia @TeddyAmkie @ishaan-jaff This change is required to avoid sending PII data to the LLM. I was not able to execute the updated code due to restrictions in our environment, but I can certainly test this change once it is added to a LiteLLM version. This is the output of the bedrock guardrails:

```
17:59:03 - LiteLLM Proxy:DEBUG: bedrock_guardrails.py:408 - Bedrock AI response:
{'action': 'GUARDRAIL_INTERVENED',
 'actionReason': 'Guardrail blocked.',
 'assessments': [{'contentPolicy': {'filters': [{'action': 'BLOCKED',
                                                 'confidence': 'HIGH',
                                                 'detected': True,
                                                 'filterStrength': 'LOW',
                                                 'type': 'PROMPT_ATTACK'}]},
                  'invocationMetrics': {'guardrailCoverage': {'textCharacters': {'guarded': 4430, 'total': 4430}},
                                        'guardrailProcessingLatency': 686,
                                        'usage': {'automatedReasoningPolicies': 0,
                                                  'automatedReasoningPolicyUnits': 0,
                                                  'contentPolicyImageUnits': 0,
                                                  'contentPolicyUnits': 5,
                                                  'contextualGroundingPolicyUnits': 0,
                                                  'sensitiveInformationPolicyFreeUnits': 0,
                                                  'sensitiveInformationPolicyUnits': 5,
                                                  'topicPolicyUnits': 0,
                                                  'wordPolicyUnits': 5}},
                  'sensitiveInformationPolicy': {'piiEntities': [{'action': 'NONE', 'detected': True, 'match': '[REDACTED]', 'type': 'NAME'},
                                                                 {'action': 'NONE', 'detected': True, 'match': '[REDACTED]', 'type': 'PHONE'},
                                                                 {'action': 'NONE', 'detected': True, 'match': '[REDACTED]', 'type': 'EMAIL'}]}}],
 'blockedResponse': 'Sorry, your question in its current format is unable to be answered.',
 'guardrailCoverage': {'textCharacters': {'guarded': 4430, 'total': 4430}},
 'output': [{'text': 'Sorry, your question in its current format is unable to be answered.'}],
 'outputs': [{'text': 'Sorry, your question in its current format is unable to be answered.'}],
 'usage': {'automatedReasoningPolicies': 0, 'automatedReasoningPolicyUnits': 0, 'contentPolicyImageUnits': 0, 'contentPolicyUnits': 5, 'contextualGroundingPolicyUnits': 0, 'sensitiveInformationPolicyFreeUnits': 0, 'sensitiveInformationPolicyUnits': 5, 'topicPolicyUnits': 0, 'wordPolicyUnits': 5}}
```

I have attached the screenshots from the existing code execution and updated the function to prioritize the blocking before reaching the LLM.
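A response like the one above can be checked for a blocking decision roughly as follows. This is a sketch, not LiteLLM's actual parsing code; the `response` dict is abridged from the debug output pasted above, and `is_blocked` is an illustrative helper name.

```python
# Sketch: deciding whether a Bedrock ApplyGuardrail response blocked the
# input. The dict below is abridged from the debug output shown above.
response = {
    "action": "GUARDRAIL_INTERVENED",
    "assessments": [{
        "contentPolicy": {"filters": [
            {"action": "BLOCKED", "detected": True, "type": "PROMPT_ATTACK"},
        ]},
        "sensitiveInformationPolicy": {"piiEntities": [
            {"action": "NONE", "detected": True, "type": "NAME"},
        ]},
    }],
    "blockedResponse": "Sorry, your question in its current format is unable to be answered.",
}


def is_blocked(resp: dict) -> bool:
    """True if any content-policy filter or PII entity was BLOCKED."""
    for assessment in resp.get("assessments", []):
        filters = assessment.get("contentPolicy", {}).get("filters", [])
        if any(f.get("action") == "BLOCKED" for f in filters):
            return True
        entities = assessment.get("sensitiveInformationPolicy", {}).get("piiEntities", [])
        if any(e.get("action") == "BLOCKED" for e in entities):
            return True
    return False
```

Note that in the pasted output the PII entities carry `'action': 'NONE'` (detect-only), while the `PROMPT_ATTACK` filter is `BLOCKED`, so it is the content-policy filter that triggers the block here.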
This is the configuration used:

```yaml
guardrails:
```
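The `guardrails:` section above was cut off in the page capture. For orientation, a LiteLLM Bedrock guardrail config generally has the shape below; the name and identifier values are illustrative placeholders, not the reporter's actual configuration.

```yaml
# Illustrative LiteLLM proxy config shape (placeholder values).
guardrails:
  - guardrail_name: "bedrock-pre-guard"     # placeholder name
    litellm_params:
      guardrail: bedrock
      mode: "pre_call"                      # run before the LLM call
      guardrailIdentifier: "gr-xxxxxxxx"    # placeholder Bedrock guardrail ID
      guardrailVersion: "DRAFT"
      mask_request_content: true            # the masking flag discussed in the diff
```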
@kothamah sounds good, I just need you to add a test for your changes inside
Updated the function to reflect the flow
@krrishdholakia I have added test cases for the bedrock guardrail changes, so please check and let me know if you have any questions.


Title
litellm bedrock guardrails block content feature: LiteLLM needs to block PII content before sending it to the LLM.
This change corresponds to the fix required for implementing guardrails and not sharing PII data with models.