Gregory Bagramyan

Data Analyst

Hi and welcome to my portfolio website!

Keep scrolling and find out more about my latest portfolio projects, the technologies I use, my professional and educational background and more!

Happy visit :)

First, let’s have a look at my latest project: the DBT on Google Cloud Platform project. In this project I deploy my DBT (Data Build Tool) project on Google Cloud Platform using Google Cloud services like Artifact Registry, Cloud Run and Cloud Composer. There is also CI/CD involved, through GitHub Actions.

By clicking below you will find 3 articles: one on the infrastructure, one on the DBT project itself and one on the data visualization of the data used in the project (the last one is still a work in progress).

Click to check the DBT on Google Cloud Platform Project -

Or click here to check the project
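To give a rough idea of how these pieces fit together, here is a minimal, hypothetical sketch of the Cloud Composer (Airflow) side: a DAG that triggers the Cloud Run job wrapping the dbt image stored in Artifact Registry. The job name, project and region below are placeholders, not the actual values from the project; the articles above cover the real setup.

```python
# Hypothetical sketch (assumes Airflow 2.x on Cloud Composer): a daily DAG
# that triggers the containerized dbt job on Cloud Run via the gcloud CLI.
# "dbt-run-job", "my-gcp-project" and the region are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_run",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Execute the Cloud Run job that wraps the dbt image from Artifact Registry
    run_dbt = BashOperator(
        task_id="run_dbt_cloud_run_job",
        bash_command=(
            "gcloud run jobs execute dbt-run-job "
            "--project my-gcp-project --region europe-west1 --wait"
        ),
    )
```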

Now let’s see what tools I have used during my 3 years of experience, either professionally or in personal projects.

Data Transformation:

  • DBT (Data Build Tool) used in a personal project (the DBT on Google Cloud Platform project above)

Databases / Platforms:

  • Snowflake used at 2K for 2 years

  • PostgreSQL used at Xeneta for a year and 2K for 2 years

  • Databricks used at 2K for 2 years

Data Visualization Tools:

  • Tableau used at 2K for 2 years

  • Plotly (Python library) used at 2K for 2 years

  • Amazon QuickSight used at Xeneta for a year and 2K for 2 years

  • Metabase used in a personal project

  • Power BI used in this project (work in progress)

Languages:

  • Python:

    • I used Python at Xeneta for a year for automating data cleaning and processes

    • I used Python at 2K for 2 years for building data visualizations, self-service data analysis notebooks, personal data analysis notebooks and data pipelines.

    • Libraries I use: pandas, numpy, plotly, psycopg2, sqlalchemy

  • SQL:

    • I used SQL at Xeneta for a year answering ad hoc questions from customers or customer success colleagues.

    • I used SQL at 2K for 2 years for catching cheaters using known gameplay patterns

    • In SQL I use CTEs and window functions (see the sketch right after this list)
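To show the kind of CTE + window function query I mean, here is a small illustrative sketch, run from Python with SQLAlchemy (one of the libraries listed above). The table, columns and threshold (player_matches, kills, match_start, the 5x rule) are made up for this example and are not the actual patterns used at 2K.

```python
# Illustrative only: a hypothetical gameplay table queried with a CTE and a
# window function to spot matches far above a player's own baseline.
from sqlalchemy import create_engine, text

# Placeholder connection string, not a real database
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

query = text("""
    WITH match_stats AS (
        SELECT
            player_id,
            match_id,
            kills,
            -- rolling average of kills over the player's last 20 matches
            AVG(kills) OVER (
                PARTITION BY player_id
                ORDER BY match_start
                ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
            ) AS avg_kills_last_20
        FROM player_matches
    )
    SELECT player_id, match_id, kills, avg_kills_last_20
    FROM match_stats
    WHERE kills > 5 * avg_kills_last_20   -- arbitrary threshold for the example
""")

with engine.connect() as conn:
    for row in conn.execute(query):
        print(row.player_id, row.match_id, row.kills)
```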

Finally, I have also used Excel:

  • As the main data cleaning tool at Xeneta for a year cleaning over 130 datasets using formulas like VLOOKUP, TRIM, LEFT, RIGHT, REPLACE, SEARCH and more.

  • For labeling data for the data science team (flagging whether a player is a cheater or not using Python and SQL) and for porting old workflows from Excel to Python (see the sketch after this list).
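To show what porting an Excel-style cleaning workflow to Python looks like in practice, here is a minimal sketch using pandas. The file names and columns are placeholders invented for the example, not actual customer data.

```python
# Minimal sketch: replicating common Excel cleaning formulas with pandas.
# File names and column names are placeholders.
import pandas as pd

raw = pd.read_excel("customer_rates.xlsx")        # the customer's raw file
ports = pd.read_csv("port_reference.csv")         # reference list of port codes

# TRIM equivalent: strip stray whitespace from text columns
raw["origin_port"] = raw["origin_port"].str.strip()

# REPLACE/UPPER equivalents: normalize a code column
raw["container_type"] = (
    raw["container_type"]
    .str.upper()
    .str.replace(" ", "", regex=False)
)

# VLOOKUP equivalent: enrich the data with a left join on the reference table
clean = raw.merge(ports, how="left", left_on="origin_port", right_on="port_name")

# Basic quality check before upload: flag rows that did not match
unmatched = clean[clean["port_code"].isna()]
print(f"{len(unmatched)} rows need a manual check")

clean.to_csv("customer_rates_clean.csv", index=False)
```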

Next, let’s talk about my professional background.

Xeneta is a fast-paced, fast-growing data scale-up that is revolutionizing the logistics industry using data. To be more specific, it provides price benchmarks for shipping containers across the world, an industry that until now was very opaque.

I worked there for a year and started as a Junior Data Analyst. My main job was to clean the price data coming from customers and upload it to the platform. This is how the company was able to provide price aggregates on its platform. Later, as I took on more responsibilities during the year, like answering questions with SQL and being in charge of the workload distribution dashboard (Amazon QuickSight), I got promoted to Data Analyst. During my data cleaning duties I started with Excel, and once I figured out that the format from a customer was consistent, I would migrate the logic to a Python script that would later be uploaded to GitHub and run automatically, with a human checking step of course.

Here is what I put on my CV to talk about my journey at Xeneta:

• Data Processing: Cleaned, prepared, and quality-checked over 130 datasets from various sources, reducing data processing time by 17%.

• Dashboard Management: Maintained and improved a dashboard using Amazon QuickSight to solve business problems and support decision-making.

• Advanced SQL: Answered client and customer success questions using advanced SQL queries (CTEs, window functions, subqueries, joins).

• Automation: Developed Python scripts to automate data processing tasks, enhancing efficiency and accuracy. Documented and published scripts on GitHub.

• Communication: Communicated analytical results and insights to multiple stakeholders, including customer success, product, data science, tech, sales, and management teams.

2K is a well-known video game publisher, known for big franchises like NBA 2K, Borderlands, Civilization, Mafia and more.

There I started as a Game Security Analyst, but since I had been doing Data Analyst tasks from the beginning, after a while we decided to make it an official Data Analyst role and named it Game Security Data Analyst.

I have two main jobs at 2K: helping the Senior Game Security Analyst catch cheaters using my technical Data Analyst skills (more on that later) and reporting on the Game Security team’s product: our anti-cheat software.

To help the Senior Game Security Analyst catch cheaters, I would build self-service data analysis Python notebooks or directly write SQL queries to match patterns he knew were fraudulent.

One project I’m really proud of is automating ban waves using Python scripts and AWS Lambda functions. It required designing the workflow, writing the logic and collaborating with our cloud architect to deploy the scripts on the cloud.
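To give a flavour of what one step in such a workflow could look like, here is a hypothetical sketch of a Lambda handler. The event shape, environment variable and ban endpoint are invented for the example; the real detection logic and internal APIs are not shown here.

```python
# Hypothetical sketch of a ban-wave Lambda step: read a batch of flagged
# account IDs from the triggering event and forward them to a placeholder
# internal ban endpoint. The event shape, URL and payload are invented
# for the example and are not the actual 2K systems.
import json
import os
import urllib.request


def lambda_handler(event, context):
    # The upstream detection job is assumed to pass flagged IDs in the event payload
    flagged_ids = event.get("flagged_account_ids", [])

    if not flagged_ids:
        return {"status": "nothing to do", "banned": 0}

    payload = json.dumps({"account_ids": flagged_ids}).encode("utf-8")
    request = urllib.request.Request(
        url=os.environ["BAN_API_URL"],          # placeholder internal endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    return {"status": "ok", "banned": len(flagged_ids), "api_response": result}
```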

Here is what I put on my CV to talk about my journey at 2K:

• Data Analysis: Found cheaters by spotting abnormal patterns in user data, using advanced SQL to query large datasets on Snowflake and PostgreSQL databases.

• Data Visualization: Created interactive dashboards using Tableau and Plotly, enabling dynamic data exploration and better reporting and decision-making.

• Automation / Process Optimization: Developed Python scripts and automated ban wave workflows, significantly reducing manual effort and increasing operational efficiency.

• Data Engineering: Created data pipelines with Python, along with tables, views and scheduled (cron) jobs within PostgreSQL (see the sketch after this list).

• Project Management: Managed and prioritized multiple tasks and requests autonomously.
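As an illustration of the data engineering point above, here is a minimal sketch of a pipeline step meant to run on a schedule (for example via a cron job): a Python script that refreshes a small reporting table inside PostgreSQL. The schema, table and column names are placeholders invented for the example.

```python
# Minimal sketch of a scheduled pipeline step: rebuild a reporting table
# inside PostgreSQL with psycopg2. Schema/table/column names are placeholders.
import psycopg2

REFRESH_SQL = """
    TRUNCATE reporting.daily_anticheat_summary;
    INSERT INTO reporting.daily_anticheat_summary (day, detections, unique_players)
    SELECT
        date_trunc('day', detected_at) AS day,
        COUNT(*)                       AS detections,
        COUNT(DISTINCT player_id)      AS unique_players
    FROM raw.detections
    GROUP BY 1;
"""


def main():
    conn = psycopg2.connect("dbname=analytics user=pipeline")  # placeholder DSN
    try:
        with conn.cursor() as cur:
            # Both statements run in one transaction and are committed together
            cur.execute(REFRESH_SQL)
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    main()
```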

Let’s now focus on my educational background.

I have a Master’s degree in behavioral economics. This is where I started learning about data analysis with Python, R and SQL, and understood that this is what I wanted to do for work. For my master’s thesis on scarcity marketing I couldn’t run a real-life experiment because of the pandemic, so I decided to use a real-life dataset provided by StockX, the main platform for sneaker reselling. The dataset was only about 100k rows, but at the time it felt huge. Using R to turn that amount of rows into simple visualizations was an amazing feeling that still pushes me to be a Data Analyst.

Thank you for scrolling this far!

You can find my contact info below :)