Create and support data collection (Nginx/Lua), processing (Java/Python/SQL with Hadoop/Hive/PrestoDB), and related tooling.
As DevOps: create testing tools and infrastructure for Big Data products. Build, update, and maintain cloud infrastructure for data collection and processing (25M data requests and 250M data points per hour at peak, 1.5 TB of data per day).
Create build and testing infrastructure (Continuous Integration and Delivery, CircleCI).
Maintain the data processing pipeline (Hadoop/Hive/PrestoDB/S3/Redshift/Azure Storage).
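A pipeline like this typically lands hourly data in Hive-style partitioned S3 prefixes so Hive/PrestoDB can prune partitions at query time. A minimal sketch of generating such prefixes; the bucket and table names (`example-data-lake`, `events`) and the `dt=`/`hr=` layout are illustrative assumptions, not the actual production layout:

```python
from datetime import datetime, timedelta

def hourly_partitions(bucket, table, start, hours):
    """Generate Hive-style dt=/hr= partition prefixes for an hourly S3 layout."""
    stamps = [start + timedelta(hours=h) for h in range(hours)]
    return [
        "s3://{}/{}/dt={:%Y-%m-%d}/hr={:02d}/".format(bucket, table, ts, ts.hour)
        for ts in stamps
    ]

# Three hourly partitions starting at 22:00, rolling over midnight.
paths = hourly_partitions("example-data-lake", "events", datetime(2016, 3, 1, 22), 3)
for p in paths:
    print(p)
```

Keeping the partition key in the object key itself lets the query engine skip whole prefixes instead of scanning every file.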
Implement cost-saving strategies for the data pipeline.
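One common cost-saving lever at this data volume is converting raw logs to a compressed columnar format before long-term S3 storage. A back-of-the-envelope sketch of the saving; the compression ratio and storage price are illustrative assumptions (only the 1.5 TB/day figure comes from the text above):

```python
RAW_TB_PER_DAY = 1.5          # daily raw volume, from the source text
COMPRESSION_RATIO = 0.15      # assumption: columnar + compression keeps ~15% of raw size
S3_PRICE_PER_TB_MONTH = 23.0  # assumption: flat storage price, USD per TB-month

def monthly_storage_cost(tb_per_day, ratio, price_per_tb_month, days=30):
    """Storage cost of one month of retained data at the given compression ratio."""
    return tb_per_day * days * ratio * price_per_tb_month

raw_cost = monthly_storage_cost(RAW_TB_PER_DAY, 1.0, S3_PRICE_PER_TB_MONTH)
packed_cost = monthly_storage_cost(RAW_TB_PER_DAY, COMPRESSION_RATIO, S3_PRICE_PER_TB_MONTH)
print("raw: $%.2f/mo  compressed: $%.2f/mo  saved: $%.2f/mo"
      % (raw_cost, packed_cost, raw_cost - packed_cost))
```

The saving compounds with retention: every extra month of history retained multiplies the gap between the two columns.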
Migrate resource management to container-based solutions (Docker/AWS ECS/Azure ACS/DC/OS/Mesos). Migrate from Hadoop-based processing to Spark.
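Moving from Hadoop MapReduce to Spark largely means replacing explicit map/shuffle/reduce phases with chained transformations. A dependency-free toy illustration of the two styles on a word count; `reduce_by_key` is a plain-Python stand-in that mimics Spark's `reduceByKey`, not actual pyspark:

```python
from collections import defaultdict
from functools import reduce

lines = ["a b a", "b c"]

# MapReduce style: explicit map phase, shuffle (group by key), then reduce phase.
mapped = [(word, 1) for line in lines for word in line.split()]
shuffled = defaultdict(list)
for key, value in mapped:
    shuffled[key].append(value)
reduced_mr = {k: reduce(lambda a, b: a + b, vs) for k, vs in shuffled.items()}

# Spark style: the same computation expressed as one chained transformation.
def reduce_by_key(pairs, fn):
    """Toy stand-in for Spark's reduceByKey: merge values per key with fn."""
    acc = {}
    for k, v in pairs:
        acc[k] = fn(acc[k], v) if k in acc else v
    return acc

reduced_spark = reduce_by_key(
    ((w, 1) for line in lines for w in line.split()),
    lambda a, b: a + b,
)

assert reduced_mr == reduced_spark == {"a": 2, "b": 2, "c": 1}
```

Beyond the terser API, the practical win in a real migration is that Spark keeps intermediate results in memory across stages instead of writing them to disk between every map and reduce.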
Develop a front-end single-page application with CoffeeScript/Backbone+Marionette and the back-end with Ruby on Rails + Spree Commerce.
Create an online designer service that lets customers customize goods.
Develop new features; fix bugs and resolve issues.
Major features I helped implement: a new unit-testing engine; internationalization and localization in the production system; integration of brand-new data sources for the site; migration to new visualization tools.
Master's degree, Computer Engineering