# Overview

Refactor the profile scraper in `scrapers/profiles.go` to pull its data from the API described below, and create a `parser/profiles.go` file that parses that data into our preexisting format, possibly with additional fields if the API exposes more information.

## Information from the Office of Research and Innovation, which maintains UT Profiles

> Here's the documentation I created for the [profiles.utdallas.edu](https://profiles.utdallas.edu/) API: https://documenter.getpostman.com/view/8354395/TzCHApzg . It's a public API. The one thing I would strongly suggest is to try to limit the scope of your calls so as to not overload our server. (Some faculty have thousands of publications, so grabbing all that data is quite intensive.) You might, for example, start by grabbing the list of profiles at https://profiles.utdallas.edu/api/v1, then loop through in chunks and pull something like
>
> https://profiles.utdallas.edu/api/v1?person=herve.abdi;nimali.abeykoon;robert.ackerman&with_data=1&data_type=information;areas
>
> Alternatively, just loop through each of the schools (AHT, BBS, ECS, etc.), like this:
>
> https://profiles.utdallas.edu/api/v1?from_school=AHT&with_data=1&data_type=information;areas

A rough sketch of that chunked-request loop is at the bottom of this issue.

# Other Issues

This issue would replace #81 but probably won't have any effect on #44.
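
# Sketch of the chunked request loop

A minimal sketch of the `person=...` chunking approach suggested above, not part of the requirements. The usernames here are hard-coded from the example URL; the real scraper would first fetch the profile index from https://profiles.utdallas.edu/api/v1 (whose exact JSON shape isn't documented in this issue) and extract usernames from it. The `fetchChunk` helper name, the chunk size, and the delay between requests are all arbitrary choices for illustration.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

const baseURL = "https://profiles.utdallas.edu/api/v1"

// fetchChunk pulls detailed data for a small batch of profile usernames,
// joining them with ';' exactly as in the example URL quoted above.
func fetchChunk(client *http.Client, usernames []string) ([]byte, error) {
	reqURL := fmt.Sprintf("%s?person=%s&with_data=1&data_type=information;areas",
		baseURL, strings.Join(usernames, ";"))

	resp, err := client.Get(reqURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("profiles API returned %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Hard-coded for the sketch; the real scraper would get these from the
	// index call to baseURL with no query parameters.
	usernames := []string{"herve.abdi", "nimali.abeykoon", "robert.ackerman"}

	client := &http.Client{Timeout: 30 * time.Second}
	const chunkSize = 3 // arbitrary; keep batches small so we don't overload their server

	for start := 0; start < len(usernames); start += chunkSize {
		end := start + chunkSize
		if end > len(usernames) {
			end = len(usernames)
		}

		body, err := fetchChunk(client, usernames[start:end])
		if err != nil {
			fmt.Println("chunk failed:", err)
			continue
		}

		// The raw JSON would then be handed off to parser/profiles.go.
		fmt.Printf("got %d bytes for %v\n", len(body), usernames[start:end])

		time.Sleep(500 * time.Millisecond) // brief pause between chunks
	}
}
```

The per-school alternative (`from_school=AHT&with_data=1&data_type=information;areas`) would only change the query string built in `fetchChunk`; the rest of the loop stays the same.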