EverWatch is a full-service government solutions company providing advanced defense, intelligence, and deployed support to our country’s most critical missions. Harnessing the most advanced technology and solutions, we strengthen defenses and control environments to preserve continuity and ensure mission success.
Founded in 2010, BrainTrust is an ever-growing and evolving company, with focus areas of Software Engineering (Machine Learning, Cloud Computing, HPC, Data Mining, HLT), Mission Operations (System Integration, Sensors, Deployment, Training, Support), and System Engineering (System Design, Requirements, Process Engineering, Resource Allocation).
In August 2020, BrainTrust joined forces with EverWatch. This integration will enhance the merged company’s technical capabilities, mission expertise, and contract presence for intelligence community customers.
BrainTrust employees are focused on tackling the US Government's most difficult challenges. We offer the best salaries and benefits packages in our industry to identify and retain top talent in support of our critical mission objectives.
Clearance: The majority of positions require a Top Secret security clearance based on a current background investigation (SBI), as well as favorable completion of a polygraph. Clearance and polygraph processing will be completed by the U.S. Government.
BrainTrust is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), gender identity, sexual orientation, national origin, age (40 or older), disability, genetic information, citizenship or immigration status, veteran status, or any other factor prohibited by applicable law.
The candidate should be knowledgeable in Python and have experience analyzing data sets, developing analytics, automating processes, and creating machine learning models. Tasks may include:
- Produce data visualizations that provide insight into dataset structure and meaning
- Work with subject matter experts (SMEs) to identify important information in raw data and develop scripts that extract this information from a variety of data formats (e.g., SQL tables, structured metadata, network logs)
- Incorporate SME input into feature vectors suitable for analytic development and testing
- Translate customer qualitative analysis processes and goals into quantitative formulations that are coded into software prototypes
- Develop and implement statistical, machine learning, and heuristic techniques to create descriptive, predictive, and prescriptive analytics
- Develop statistical tests to make data-driven recommendations and decisions
- Develop experiments to collect data or models to simulate data when required data are unavailable
- Develop feature vectors for input into machine learning algorithms
- Identify the most appropriate algorithm for a given dataset and tune input and model parameters
- Evaluate and validate the performance of analytics using standard techniques and metrics (e.g., cross-validation, ROC curves, confusion matrices)
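As a rough illustration of the evaluation task above, the sketch below trains a simple classifier and scores it with cross-validation, ROC AUC, and a confusion matrix. scikit-learn and the synthetic dataset are assumptions for the example; the posting names only Python.

```python
# Hypothetical evaluation sketch (scikit-learn assumed; not the team's
# actual stack or data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data standing in for extracted feature vectors
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation estimates generalization performance
cv_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")

# Hold-out split for a confusion matrix and a final ROC AUC score
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
cm = confusion_matrix(y_te, model.predict(X_te))
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

The same pattern extends to any estimator with `fit`/`predict`; only the scoring metric and splitting strategy change.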
Bachelor's and Master's degrees from an accredited college or university in a quantitative discipline (e.g., statistics, mathematics, operations research, engineering, or computer science); five years of experience analyzing datasets and developing analytics; and five years of experience programming with data analysis software such as R, Python, SAS, or MATLAB. An additional two years of experience in software development, cloud development, analyzing datasets, or developing descriptive, predictive, and prescriptive analytics may be substituted for a Master's degree. A PhD from an accredited college or university in a quantitative discipline may be substituted for three years of experience.
Any exposure to algorithm development, machine learning, and related tools/disciplines is desirable. This work will have a software aspect, but no specific language or tool is required, as the position is currently evolving. The candidate should have the ability to wrangle data of varying styles, types, and pedigrees into a form that allows pattern-recognition-style analytics.
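The wrangling described above might look like the stdlib-only sketch below: records arriving as CSV rows and JSON strings are coerced into one uniform shape ready for feature extraction. The field names (`host`, `bytes`) are hypothetical.

```python
# Illustrative sketch: normalize heterogeneous inputs into uniform
# records (field names are made up for the example).
import csv
import io
import json

def to_record(raw):
    """Coerce a JSON string or a CSV row dict into one flat record."""
    if isinstance(raw, str):
        raw = json.loads(raw)
    return {
        "host": str(raw.get("host", "unknown")),
        "bytes": int(raw.get("bytes", 0)),  # CSV values arrive as strings
    }

csv_text = "host,bytes\nalpha,120\nbeta,45\n"
csv_rows = list(csv.DictReader(io.StringIO(csv_text)))
json_rows = ['{"host": "gamma", "bytes": 300}']

records = [to_record(r) for r in csv_rows + json_rows]
```

Once every source maps into the same record shape, downstream feature-vector construction and pattern-recognition analytics can treat the data uniformly.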