Python Analytics Engineer
Overall Purpose:
The candidate will be responsible for requirements gathering and analysis, design, development, testing, and debugging of analytical, statistical/mathematical programming, optimization, decision support, and network function virtualization systems. The role also involves working with large volumes of data from diverse, complex, and frequently unrelated sources, and performing data collection, data analysis/mining, data wrangling/cleansing, data integration, and occasional mathematical/data modeling with little or no guidance.
Role and Responsibilities:
1) Requirements gathering and analysis; architect solutions with system design, system engineering, and appropriate process flows, and implement them using Agile methodology.
2) Data acquisition, ETL, data cleansing, data mining, and data integration with data quality controls, process management, and statistical analysis techniques (a minimal sketch of such quality controls follows this list).
3) Implement data modeling and statistical/mathematical, linear, or optimization programming models.
4) Development of high-performance, distributed computing tasks using Big Data technologies, with extensive use of Python and SQL and some use of Hadoop, NoSQL, Snowflake, GitHub, and other data mining/management techniques in distributed environments.
5) Design and develop software (web-based or otherwise, using Python to automate data feeds) for decision support, optimization, planning, or other data mining/modeling applications for mobility, network engineering, or enterprise solutions.
6) Web applications with multiple levels/dimensions of security, inside and outside firewalls, eCommerce-like, with encryption or equivalent technologies.
7) Write code, complete programming and documentation, and perform testing and debugging of applications using programming languages and technologies in Unix/Linux, VM, and web environments.
8) Set up and administer document/file transfer/repository systems such as FTP, SSH, SharePoint drives, or other similar technologies.
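As a rough illustration of the data quality controls mentioned in item 2, here is a minimal sketch in Python; the column names, key, and sample data are hypothetical placeholders, not details from the actual project:

```python
import pandas as pd

# Hypothetical schema for a raw extract; real column names would differ.
REQUIRED = {"event_date", "source_system", "record_id"}

def quality_check(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic data quality controls before the data moves downstream."""
    missing = REQUIRED - set(df.columns)
    if missing:
        raise ValueError(f"extract is missing columns: {sorted(missing)}")
    before = len(df)
    df = df.drop_duplicates(subset="record_id")         # de-duplicate on the key
    df = df.dropna(subset=["event_date", "record_id"])  # drop unusable rows
    print(f"quality check: kept {len(df)} of {before} rows")
    return df

if __name__ == "__main__":
    sample = pd.DataFrame({
        "event_date": pd.to_datetime(["2024-01-01", "2024-01-01", None]),
        "source_system": ["web", "web", "chat"],
        "record_id": [1, 1, 2],
    })
    print(quality_check(sample))  # keeps one row: duplicate and null dropped
```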
Project Overview
- Scorecard (Power BI): Developed over 7-8 years to measure the benefits of website changes (layout, buy flow, etc.).
- Metrics:
- Call shed (calls not received)
- Chat shed (chats not received)
- Goal: Reduce both metrics and create a scalable model that other areas of the company can leverage as well (a sketch of the metric computation follows this list).
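As a minimal sketch of how these metrics might be computed, assuming "shed" means contacts offered minus contacts handled (the notes define it only as "not received"); the numbers below are made up:

```python
import pandas as pd

# Made-up daily volumes; in the real project these would come from Snowflake.
daily = pd.DataFrame({
    "channel": ["call", "call", "chat", "chat"],
    "offered": [1200, 1150, 900, 980],
    "handled": [1100, 1080, 870, 940],
})

daily["shed"] = daily["offered"] - daily["handled"]    # contacts not received
daily["shed_rate"] = daily["shed"] / daily["offered"]  # share of demand lost

print(daily.groupby("channel")[["shed", "shed_rate"]].mean())
```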
Requirements
- Automation:
- Tools: Snowflake, PS, Excel, Power BI.
- Heavy use of Python, including object-oriented Python (see the pipeline sketch after this list).
- Address backend data challenges (location, transfer, analytics).
- Build a scalable, end-to-end process.
- Data needs to be consistent and readily ingestible from various sources.
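A minimal object-oriented sketch of what the automated, scalable process could look like, assuming the snowflake-connector-python and pandas packages; the queries, paths, and connection details are placeholders:

```python
from abc import ABC, abstractmethod
import pandas as pd
import snowflake.connector  # pip install "snowflake-connector-python[pandas]"

class DataSource(ABC):
    """Common interface so new sources plug in without touching the pipeline."""
    @abstractmethod
    def fetch(self) -> pd.DataFrame: ...

class SnowflakeSource(DataSource):
    def __init__(self, query: str, **conn_kwargs):
        self.query = query              # e.g. "SELECT * FROM scorecard.daily_shed"
        self.conn_kwargs = conn_kwargs  # account, user, password, warehouse, ...

    def fetch(self) -> pd.DataFrame:
        conn = snowflake.connector.connect(**self.conn_kwargs)
        try:
            return conn.cursor().execute(self.query).fetch_pandas_all()
        finally:
            conn.close()

class ExcelSource(DataSource):
    def __init__(self, path: str):
        self.path = path

    def fetch(self) -> pd.DataFrame:
        return pd.read_excel(self.path)

def run_pipeline(sources: list[DataSource], out_path: str) -> None:
    """Pull every source, combine, and write one consistent file for Power BI."""
    combined = pd.concat([src.fetch() for src in sources], ignore_index=True)
    combined.to_csv(out_path, index=False)
```

Power BI would then refresh against the single combined output, so adding a new feed means adding a DataSource subclass rather than another manual step.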
Ideal Candidate
- Skills:
- Self-starter, independent worker.
- Proficient in Python and GitHub.
- Strong understanding of data and SQL.
- Ability to understand and automate existing code.
- Bonus: Experience with Power BI calculations.
- Responsibilities:
- Build and automate processes.
- Articulate the need for, and champion, the necessary resources and systems.
- No existing backlog; first 30 days to understand current processes.