Custom application that looks for new files in particular folders of an S3 bucket and interacts with the Metadata API based on the data in the delimited file (a minimal sketch of such a handler follows the list of operations below):
- Obtain a list of current OCLC Numbers based on a given OCLC Number
- Obtain a list of merged OCLC Numbers based on a given OCLC Number
- Set Holdings based on a given OCLC Number
- Delete Holdings based on a given OCLC Number
- Add an LBD based on a given OCLC Number and 500 note data
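As context for the operations above, the sketch below outlines how a handler for one of these folders might behave: receive the S3 event for a newly uploaded file, read the delimited data, act on each OCLC Number, and write a results file back to the bucket. This is a minimal illustration only, assuming boto3 is available; getCurrentOclcNumber is a hypothetical stand-in for the project's Metadata API call, and the repository's actual handlers may be structured differently.

import csv
import io

import boto3  # assumed to be available to the deployed function

s3 = boto3.client("s3")


def getCurrentOclcNumber(oclc_number):
    # Hypothetical placeholder; the real application resolves the number
    # against the Metadata API and returns the current OCLC Number.
    return oclc_number


def handler(event, context):
    # The S3 trigger supplies the bucket and key of the newly uploaded file.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Download and parse the delimited file.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    results = []
    for row in csv.reader(io.StringIO(body)):
        oclc_number = row[0]
        results.append([oclc_number, getCurrentOclcNumber(oclc_number)])

    # Write a results file back to the same folder in the bucket.
    out = io.StringIO()
    csv.writer(out).writerows(results)
    s3.put_object(
        Bucket=bucket,
        Key=key.replace(".csv", "_results.csv"),
        Body=out.getvalue().encode("utf-8"),
    )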
Clone this repository
$ git clone {url}
or download directly from GitHub.
Change into the application directory
$ python -m venv venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ python -m pytest

usage: processSheet.py [-h] --itemFile ITEMFILE --operation
                  {getCurrentOCLCNumbers, retrieveMergedOCLCNumbers, setHoldingsbyOCLCNumber, deleteHoldingsbyOCLCNumber, addLBDs}
                  --outputDir OUTPUTDIR
optional arguments:
  -h, --help            show this help message and exit
  --itemFile ITEMFILE   File you want to process
  --operation {getCurrentOCLCNumbers, retrieveMergedOCLCNumbers, setHoldingsbyOCLCNumber, deleteHoldingsbyOCLCNumber, addLBDs}
                        Operation to run: getCurrentOCLCNumbers, 
                        retrieveMergedOCLCNumbers, setHoldingsbyOCLCNumber, 
                        deleteHoldingsbyOCLCNumber, addLBDs
  --outputDir OUTPUTDIR
                        Directory to save output to                                                                       
                        $ python processSheet.py --itemFile samples/oclc_numbers.csv --operation getCurrentOCLCNumbers --outputDir samples/getCurrentOCLCNumbers.csv
$ python processSheet.py --itemFile samples/oclc_numbers_holdings.csv --operation retrieveMergedOCLCNumbers --outputDir samples/mergedOCLCNumbers.csv
$ python processSheet.py --itemFile samples/sp_holdings.csv --operation setHoldingsbyOCLCNumber --outputDir samples/addedHoldings.csv
$ python processSheet.py --itemFile samples/my_retentions.csv --operation deleteHoldingsbyOCLCNumber --outputDir samples/removedHoldings.csv
$ python processSheet.py --itemFile samples/symbol_retentions.csv --operation addLBDs --outputDir samples/newLBDs.csv

Download node and npm and use the install command to install the dependencies from the JSON file
$ npm install
- Install AWS Command line tools
- https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html (I recommend installing with pip.)
- Create an AWS user in IAM console. Give it appropriate permissions. Copy the key and secret for this user to use in the CLI.
- Configure the command line tools - https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
- Make sure you add your key/secret and your region
- Use the AWS Console to create a bucket. Note your bucket name!!!
- Create folder metadata_tasks/
- Add a sample csv file named getCurrentOCLCNumbers.csv with data to check for current OCLC Numbers
- Add a sample csv file named getMergedOCLCNumbers.csv with data to check for merged OCLC Numbers
- Add a sample csv file named addHoldings.csv with data to add holdings by OCLC Number
- Add a sample csv file named deleteHoldings.csv with data to delete holdings by OCLC Number
- Add a sample csv file named LBDsToAdd.csv with data to add LBD records to OCLC Numbers
- Alter s3-getCurrentOCLCNumbers.json to point to your bucket and your sample csv file (the shape of this event payload is sketched at the end of this section).
- Use serverless to test locally
$ serverless invoke local --function getCurrentOCLCNumbers --path s3-getCurrentOCLCNumbers.json
- Alter s3-getMergedOCLCNumbers.json to point to your bucket and your sample csv file.
- Use serverless to test locally
$ serverless invoke local --function getMergedOCLCNumbers --path s3-getMergedOCLCNumbers.json
- Alter s3-addHoldings.json to point to your bucket and your sample csv file.
- Use serverless to test locally
$ serverless invoke local --function addHoldings --path s3-addHoldings.json
- Alter s3-deleteHoldings.json to point to your bucket and your sample csv file.
- Use serverless to test locally
$ serverless invoke local --function deleteHoldings --path s3-deleteHoldings.json
- Alter s3-addLBDs.json to point to your bucket and your sample csv file.
- Use serverless to test locally
$ serverless invoke local --function addLBDs --path s3-addLBDs.json
- Use serverless to deploy
$ serverless deploy
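For reference, the s3-*.json files passed to serverless invoke local stand in for the S3 notification that would trigger each function. Below is a minimal sketch of what such a test event might contain; the bucket name and key shown here are placeholders, and the actual files in the repository may carry additional fields.

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "your-bucket-name" },
        "object": { "key": "metadata_tasks/getCurrentOCLCNumbers.csv" }
      }
    }
  ]
}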