smarth-tech/AI-agent-

AI Recruiter Project

Overview

This project is an AI-driven recruitment tool designed to scrape and analyze GitHub and Google Scholar profiles to identify top candidates for AI and ML roles. The tool classifies input queries, fetches profile details, and ranks candidates based on their GitHub repositories and Google Scholar citations.

File Structure

Workflow and code structure:

```mermaid
graph TD
    subgraph Query Classifier
        A1[query_classifier.py] --> B1[scangit.py]
        A1 --> C1[scangs.py]
        A1 --> D1[filter.py]
    end

    subgraph GitHub Scorer
        B1 --> B2[github.py]
        B2 --> B3[UserGitHubDetails]
    end

    subgraph Google Scholar Scraper
        C1 --> C2[googlescholar.py]
    end

    subgraph Filter Authors
        D1 --> D2[prof.py]
        D2 --> D3[authors.py]
    end
```

candidate.py

Defines the Candidate class and methods to calculate scores based on GitHub activity, university, and other criteria.
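A minimal sketch of what such a class might look like; the field names, weights, and `overall_score` method are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """Holds a candidate's per-source scores (illustrative fields)."""
    name: str
    github_score: float = 0.0
    scholar_score: float = 0.0
    university_score: float = 0.0

    def overall_score(self, w_github: float = 0.5,
                      w_scholar: float = 0.3,
                      w_univ: float = 0.2) -> float:
        # Weighted sum of the per-source scores; the weights are illustrative.
        return (w_github * self.github_score
                + w_scholar * self.scholar_score
                + w_univ * self.university_score)
```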

github.py

Fetches GitHub repositories, calculates repository scores, and aggregates them to provide an overall GitHub score for a user.
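The aggregation step can be sketched as follows, assuming the repositories have already been fetched as GitHub-API-style dicts (the scoring weights and function names are assumptions, not the project's actual formula):

```python
def repo_score(repo: dict) -> int:
    # Stars weighted above forks; the 2x multiplier is illustrative.
    return 2 * repo.get("stargazers_count", 0) + repo.get("forks_count", 0)


def github_score(repos: list) -> int:
    # Aggregate per-repository scores into one overall GitHub score.
    return sum(repo_score(r) for r in repos)
```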

university.py

Contains a dictionary of predefined university scores and methods to calculate university scores for candidates.
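A sketch of the lookup pattern this implies; the entries, scale, and default value below are illustrative, not the project's actual table:

```python
# Illustrative entries; the real table and scale live in university.py.
UNIVERSITY_SCORES = {
    "MIT": 100,
    "Stanford": 98,
    "UC Berkeley": 95,
}


def university_score(university: str, default: int = 50) -> int:
    # Universities not in the table fall back to a default score.
    return UNIVERSITY_SCORES.get(university, default)
```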

prof.py

Stores professor data, including names, universities, and homepage URLs.

normalise.py

Normalizes the scores of candidates and sorts them based on the overall normalized score.
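A minimal sketch of this step using min-max normalization over candidate dicts; the key names and the choice of min-max scaling are assumptions about how the script works:

```python
def normalise_and_rank(candidates: list, key: str = "score") -> list:
    """Min-max normalise `key` into [0, 1] and sort best-first."""
    scores = [c[key] for c in candidates]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1  # avoid division by zero when all scores are equal
    for c in candidates:
        c["normalised"] = (c[key] - lo) / span
    return sorted(candidates, key=lambda c: c["normalised"], reverse=True)
```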

googlescholar.py

Fetches data from Google Scholar, calculates relevance scores, and provides citation information for profiles.
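One simple way a relevance score like this could be computed is keyword overlap between a profile's listed interests and the query; this is a sketch under that assumption, not the project's actual metric:

```python
def relevance_score(profile_interests: list, query_keywords: list) -> float:
    """Fraction of query keywords that appear among the profile's interests."""
    interests = {i.lower() for i in profile_interests}
    keywords = {k.lower() for k in query_keywords}
    if not keywords:
        return 0.0
    return len(interests & keywords) / len(keywords)
```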

authors.py

Scrapes co-authors from Google Scholar citations and returns a list of potential students and collaborators.

filter.py

Filters out professors from the list of authors and classifies remaining individuals based on their roles and degree types.
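The professor-removal step can be sketched as a case-insensitive set lookup against the names stored in prof.py (function and parameter names here are illustrative):

```python
def filter_out_professors(authors: list, professors: list) -> list:
    # Case-insensitive removal of known professors from the author list.
    prof_names = {p.lower() for p in professors}
    return [a for a in authors if a.lower() not in prof_names]
```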

student_scrape.py

Fetches detailed information for students, including their GitHub and Google Scholar profiles, and creates Candidate objects.

requirements.txt

Lists the Python libraries the project depends on, so that all dependencies can be installed in one step.

Requirements

To set up the project environment, ensure you have Python installed and then install the required libraries using requirements.txt.

Usage

Setting Up

  1. Clone the repository.
  2. Create a virtual environment and activate it:

     ```sh
     python -m venv venv
     source venv/bin/activate  # On Windows: venv\Scripts\activate
     ```

  3. Install the dependencies:

     ```sh
     pip install -r requirements.txt
     ```

Running the Project

  1. Extracting Student Details: Run app.py and click the dynamically generated link to launch the web app (Ctrl+Click on Windows or Command+Click on macOS opens the link in a new tab):

     ```sh
     python app.py
     ```

  2. Processing Queries: app.py is the only script the user needs to run. It classifies the input query, calls the other scripts, and returns the resulting database of candidates.

     Queries should reference one of Boston, California, Seattle, or Berkeley. Sample queries:

       • "Find top 6 students who have worked on TensorFlow and have a strong GitHub presence in Boston."
       • "Recruit top 5 students in California who have worked on computer vision projects."
       • "Find top 8 programmers in Seattle who have worked on GPT-3 and have published papers on NLP."
       • "Recruit top 3 scholars in Boston."
       • "Top 8 people who have worked in AI labs in Boston."
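Each sample query names a candidate count and a location; a minimal sketch of how query_classifier.py might extract them (the function name, regex, and default count are assumptions, not the project's actual code):

```python
import re

# The four locations the tool accepts, per the sample queries above.
SUPPORTED_CITIES = ["Boston", "California", "Seattle", "Berkeley"]


def classify_query(query: str) -> dict:
    """Pull the candidate count and location out of a free-text query."""
    match = re.search(r"top\s+(\d+)", query, re.IGNORECASE)
    count = int(match.group(1)) if match else 5  # illustrative default
    lowered = query.lower()
    city = next((c for c in SUPPORTED_CITIES if c.lower() in lowered), None)
    return {"count": count, "city": city}
```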

About

Hire the best AI and ML engineers from top AI labs.
