For our Swiss customer located in Basel, we are currently looking for (m/f): a Professional Developer (Scientific & Manufacturing) - two positions
- Type of placement: 5 months, external contract with possible long term extension
- Company: major Pharma company in Basel area
- Salary: level 2, please ask us for the exact rate range
(monthly all-in salary levels in CHF: level 1 = 5-10k, level 2 = 10-15k, level 3 = >15k)
We are looking for two Python Developers with expertise in big data systems to further develop the Remote Monitoring Patient Platform collecting sensor data from digital devices handed out in clinical studies. The successful candidate will establish processes and implementations to enable and facilitate the mining of digital biomarkers out of the aforementioned data.
Tasks and responsibilities
- Develop the computational backend of applications used to discover Digital Biomarkers
- Help to automate the creation of reports and dashboards supporting analytics-driven decisions based on the collected sensor data
- Participate in technical decisions regarding the implementation of new algorithms to mine sensor data
- Enhance the existing infrastructure that handles the data collected from sensors used in clinical trials
- Proactively collaborate with a team comprising data scientists, software engineers and life science experts
- Start date: 30.07.2018
- Latest start date: 01.09.2018
- End date: 31.12.2018
- Extension: An extension is planned, but needs to be approved.
- Work load: 100%
- Work location: Basel, Building 92
- Home office: possible, but prior approval by the line manager is needed.
- Travel: Not required.
- Department: pREDi (pRED Informatics, Digital Biomarkers)
- Project name: Digital Biomarker Program
Requirements:
- Bachelor’s degree with emphasis on quantitative coursework (e.g. Computer Science, Engineering, Mathematics, Data Science)
- 5 to 7 years of software development experience
- Deep knowledge of Python and one or more of the following programming languages: Scala, Go, Rust or Java
- Experience with big-data systems (in particular Hadoop, Apache Spark, Apache Kafka and MapReduce) and their respective ecosystems
- Experience writing and maintaining ETL (extract, transform, load) pipelines that operate on a variety of structured and unstructured sources
- Experience with SQL and NoSQL data stores
- Previous working experience with the common Python data analysis libraries (e.g. NumPy/SciPy, Pandas, Scikit-learn, SQLAlchemy) and, ideally, data pipelining libraries (e.g. Luigi)
- Knowledge of UNIX internals and workload management systems (SLURM, SGE/UGE)
- Ability to write standards-compliant, database-related code for MongoDB and MySQL
- Strong working knowledge of best coding practices (versioning, TDD, debugging)
- Strong analytical skills combined with conceptual thinking and structured working style; ability to work in a multicultural team
- Fluent in English
Nice to Haves:
- Experience with the Microsoft Azure environment
- Previous working experience within an Agile environment (ideally Scrum)
- Project-based work experience in the pharmaceutical industry/consulting preferred