We are seeking a skilled and motivated Data Scientist.
This individual will play a pivotal role in identifying key data points for collection, developing strategies to accumulate data, and deriving actionable insights and anomaly detection from a solid foundation of relevant know-how. They will also be responsible for creating, testing, and deploying scripts and methods for data collection and analysis to support decision-making. The Data Scientist will collaborate with cross-functional teams to identify critical data sources and determine the most effective data collection strategies, develop automated and scalable data collection pipelines, ensure data quality, integrity, and consistency across all sources, and may apply AI techniques to refine results toward failure prediction.
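For illustration only (not part of the formal requirements), the following is a minimal sketch of the kind of analysis this role involves, combining the Pandas and scikit-learn tools listed below to flag anomalies in collected data; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal anomaly-detection pass over collected
# telemetry, using Pandas and scikit-learn as named in this posting.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Load previously collected measurement data (hypothetical file and columns).
df = pd.read_csv("collected_metrics.csv")
features = df[["link_errors", "retrain_count", "temperature_c"]].fillna(0)

# Flag outliers that may indicate emerging failures.
model = IsolationForest(contamination=0.01, random_state=0)
df["anomaly"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

print(df[df["anomaly"] == -1].head())
```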
Requirements: Basic Qualifications
Bachelor's degree in Computer Science, Data Science, Engineering, Mathematics, or a related field.
Advanced degree in Data Science or Machine Learning / AI is an advantage.
Proficiency in programming languages such as Python, R, or MATLAB.
Strong understanding of data manipulation and analysis tools (e.g., Pandas, NumPy, SQL).
Understanding of high-speed interfaces such as Ethernet, PCIe, and Wi-Fi.
Experience with data visualization tools such as Tableau, Matplotlib, and Grafana.
Strong analytical and critical-thinking skills to identify patterns and outliers.
Customer-obsessed: think and act with the customer in mind!
Goal-driven and self-motivated, able to work independently and with teams of people around the globe.
Entrepreneurial and open-minded, with a can-do attitude.
Required Experience
Experience with data manipulation and analysis tools (e.g., Pandas, NumPy, SQL).
Machine learning and AI techniques and frameworks (e.g., TensorFlow, Scikit-learn).
Proven ability to manage multiple tasks and meet deadlines.
Preferred Experience
Embedded firmware development in C, and scripting in Python or other equivalent programming languages.
Master's degree in a relevant field.
Experience with cloud platforms (e.g., AWS, Azure, GCP) for data storage and processing.
Familiarity with big data technologies (e.g., Hadoop, Spark).
Knowledge of engineering design tools and processes.
This position is open to all candidates.