Conversation
AWS CodeBuild CI Report
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository |
Have you tested the inference use cases below with the DLC container? Could you please attach the test details to the description?
I think this change will also be required for the MXNet DLC containers with MMS. Instead of adding a new transformer and adapting the handler service in the PyTorch toolkit, could we consider adapting the transformer and handler service in the inference toolkit to work with the
Makes sense. I will split the code and re-run some tests. I will post the test results afterward.
Description of changes:
Enabling multi-GPU support. The change passes context information to handler functions so that models and data can be assigned to multiple GPU devices.

`input_fn(input_data, content_type)` becomes `input_fn(input_data, content_type, context)`. The existing signature `input_fn(input_data, content_type)` will still continue to work.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
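A minimal sketch of how the backward-compatible dispatch described above could be implemented: inspect the user handler's signature and pass the context only when a third parameter is accepted. The function name `invoke_input_fn` and the example handlers are hypothetical illustrations, not the toolkit's actual API.

```python
import inspect

def invoke_input_fn(input_fn, input_data, content_type, context):
    # Count the handler's parameters: pass the MMS context only when the
    # handler declares a third argument, so legacy two-argument handlers
    # keep working unchanged.
    num_params = len(inspect.signature(input_fn).parameters)
    if num_params >= 3:
        return input_fn(input_data, content_type, context)
    return input_fn(input_data, content_type)

# Legacy-style handler: no context parameter.
def legacy_input_fn(input_data, content_type):
    return ("legacy", input_data, content_type)

# New-style handler: receives the context, e.g. to select a GPU device id.
def context_input_fn(input_data, content_type, context):
    return ("with-context", input_data, content_type, context)
```

With this dispatch, both handler styles can be registered interchangeably; the toolkit decides at call time which signature to use.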