
PwC / SKY Mexico

March to June 2017


The Product

  • 3 months of research and prototyping
  • 1 month pilot test with 20 sales reps from SKY Mexico 
  • Sales app to track leads, sales, rejections, and places to return to
  • Pilot took me (product designer) and an engineer 4 weeks to research, ideate, prototype, and launch
  • Project cut short due to shift in SKY Mexico's organizational priorities

Onboarding view of the map sales app

We designed two 'states': one that showed your past customer interactions, and one that made it easier to 'pin' new interactions.

Overall, we were trying to solve a number of problems for our sales representative users. From tracking sales information and communicating with managers and coworkers to seeing their own progress, we attempted to solve these problems using a lean methodology.




Between March and June of 2017, I was a product designer on a team tasked with a high-level goal: to explore and experiment with possible software solutions for SKY Mexico. PricewaterhouseCoopers (PwC) partnered with Google to provide software solutions to companies like SKY Mexico (a large satellite television provider with almost 8 million subscribers).

Specifically, the users we were focusing on were kiosk + door-to-door sales representatives, satellite installers, and their respective managers. Our team was tasked to explore customer needs and build quick prototypes to validate different ideas. 

My role

I worked on a 'balanced team' alongside 1 other designer, 2 engineers, and a product strategist throughout every step of the process. At Philosophie, a 'balanced team' is one in which every member is part of the process - from ideation to implementation.

From exploration and user research to coding a number of prototypes, I had a multifaceted role on this project.




Heavily influenced by the Sprint GV method, we spent the first few weeks working alongside our stakeholders to understand the business needs and goals of both PwC and SKY Mexico. 

Strategy Phase: Working alongside stakeholders

This project had very concrete success metrics without having explicit solutions outlined by our stakeholders. That was a great starting point for our kickoff because it gave us constraints as to what the final outcome should look like, without being too prescriptive on how we should solve the problem. We developed a set of 'objectives and key results' (OKRs) alongside our PwC stakeholders.

After that, we set out to define our 'product principles'; these are a set of beliefs and intentions rooted in the entire team's values and vision. This included our PwC stakeholders and the team at Philosophie. I like to think about it as a 'persona' for our team; in the same way you use personas to help make decisions on behalf of your users, we used these product principles to help us get over any impasses. Some examples of our product principles were:

  • Take risks. Don’t just take the easy or safe solution. Look for the stretch.
  • Aim for what sales reps can become, rather than what they are now.

The last thing we aligned on were the user transformations. They served as a high-level description for what the user is like now and what they will be after using the product. An example of one we used was:

  • Unsure about his growth and progress -> Knows what he needs to do in order to succeed

We were very intentional with this part of the project during kickoff. Others may think these exercises are quite redundant, but we communicated to the stakeholders that they were a way for all of us to align possibly conflicting mental models. We may all have the same idea of what the objective is, but differing mental models of how the user should change through using the product could cause problems later in the project.

To quickly summarize, this strategy phase was really focused on: what the business outcomes should be; who was the focus of the project; and why we would make certain decisions.


Scope Phase: What do we want to accomplish and how might we do that?

With the strategy of the project refined, we worked on its scope. This stage answered the 'what are we doing', whereas the strategy phase answered the 'why'.

We ran a weekly sprint cadence with clear sprint objectives and results. For example, one of our first sprint objectives was, “How might we learn what is important to Mario [the manager], and what motivates Alejandro [sales rep] towards those goals?” Our stakeholders would help us ideate on each sprint objective and sign off on the one that best aligned with the goals outlined during the ‘strategy’ phase.

Below are some artifacts from this phase. We outlined the current workflow of the target user, ideated flows for different solutions, and mapped them to higher-level strategy (OKRs, user outcomes, etc).

An early iteration of our user journey map.

We often used copies of our user journey map to highlight assumptions and gaps in understanding.

One of the first things we did as a team was draft and execute problem interviews with dozens of sales representatives and managers. The problem interviews were designed to learn about the following:

  • The breakdown of an ‘average’ day on the job
  • Who they interact with and why
  • Intrinsic and extrinsic motivations
  • Whether our assumptions held up

This was done by asking questions, conducting card sorts, and scoring problem statements. I personally found these problem interviews to be very illuminating about both the users and our assumptions about their lives and motivations.

For example, we had the interviewees sort cards that contained different motivating factors (e.g. ‘learning new things’, ‘maximizing take-home’, ‘working fewer hours’). We asked them to elaborate on why they ranked them the way they did.

We took this information and consistently updated our proto-personas. More can be read about proto-personas in the book Lean UX by Jeff Gothelf. 

...proto-personas are based on the assumptions of the stakeholders, and further checked against actual data. They ultimately represent what we think our users are like.
— Andrew Jacobs


For each ‘problem’ that we ideated, we ran through the Sprint GV method alongside our stakeholders. If you’re not familiar with the Sprint GV method, it looks something like this:

Each day has a number of workshops and exercises. It would start with exercises like ‘Lightning demos’ (an exploration into analogous apps):

Each week would culminate in three-panel solution storyboards that we would use to build prototypes to test with:

These are three different ideas that everyone, including the stakeholders, would ‘heat map’ with what they liked and disliked.

Prototyping our ideas

This all culminated in a number of different prototypes and experiments. They were all ‘quick-and-dirty’ prototypes (some coded, some mockups) that we got directly into the hands of the users.

1. Geolocation prototype:

A mapping of sales data and census/demographic data over their regions.

2. Customer data app:

An app that helped sales reps collect important data from their area to assist with sales.

3. Personal Goal Study:

A concierge-prototype that had the users express personal sales goals. The prototype then provided daily updates and tips.

4. Crowdsource recommendation prototype: 

A coded prototype that crowdsourced sales tips based on demographic, region, customer, and other variables (think reddit mixed with a sales pitch guide).

5. On-going problem interviews:

We continued weekly problem interviews to further test our understanding of the sales reps and the managers. The basic cadence went like this:

  • We assumed that money was the most motivating factor for sales reps
  • Problem interviews uncovered that while money is important, there is a deeper motivation: providing for family is of paramount importance
  • We went back to our proto-personas and user journeys and updated them, keeping an evolving set of documents that was constantly tested.

Why did we do so many? Because we wanted to build and test as many ideas as possible in the shortest amount of time before heavily investing in a single solution. Some failed completely, some were huge hits, and some were somewhere in between. Overall, we learned from each prototype, and those learnings fed into the next stage of the project.



After completing the handful of prototypes, we ideated solutions with our stakeholders. This was a far more informed ideation session because we had a plethora of information about SKY Mexico’s business needs, user needs and behaviors, and the landscape.

Aliya (the other Product Designer on the team) and I had two competing ideas: a team leaderboard gamification app and a map app.

This is us digesting our research with the user journey map.

We decided to split into two ‘pods’ (each consisting of an engineer and a designer) and champion each idea. The reasoning was that we were confident in our team’s ability to quickly design, build, and iterate on these two ideas. Our stakeholder loved the idea of multiple prototypes, having seen the fruits of the 5 previous ones. We had sign-off for four weeks to build, test, and iterate on each idea and launch a full-fledged pilot at the end, with dozens of users. At the end of the pilot, the prototype with the best results would be handed off to an implementation team to add to an existing tablet app that PwC developed for SKY Mexico.

Michael (an engineer) and I took on the mapping prototype, while Aliya and George (an engineer) took on the team leaderboard app. We made a wager that the one who got green-lit after the pilot would buy the other team a round of drinks. Needless to say, we were excited.



Michael (an engineer) and I embarked on a four-week sprint to create a map-based app.

I did all the visual design, research planning, and a considerable chunk of the front-end development.

Validating a map prototype

With a good understanding of user pain-points and user flows, we ideated on what value a map-based app could provide. The user test / interview at the end of the week allowed us to validate which of these potentially had the most value.

We went through all the research conducted in previous weeks to figure out the customer needs. After that, we grouped them and connected each to a feature or solution that our map prototype addressed. We were developing assumptions that built on earlier learnings, getting us one step closer to a solution with significant impact.

De-risking the integration into existing workflows

We brainstormed how a map-based app could fit into the sales reps’ existing user journey. Past research gave us insight into the sales process of our users and how important flow and repetition were. The successful sales reps had patterns they had refined over the years, and any new process, even if it provided value, had to exist seamlessly within their current workflow.

We also knew from our user research that they used Google Maps extensively to navigate when they went door-to-door or set up their mobile kiosks.

These are some artifacts from the process of trying to understand the flow and how our app could seamlessly integrate into their existing process.

Quick-and-dirty prototyping

We did countless rounds of ‘crazy-8s’ and other ideation methods to come up with as many solutions as possible within the constraints that were identified.

By Thursday, we coded up a prototype using Framer.js and Mapbox, wrote up a test script, and by Friday, we had shipped our first iteration.

This prototype was able to: simulate past route overlays; randomly map out any number of inputted interactions we needed for each sales rep (sales, warm leads, etc.); and provide an interactive sidebar that connected the icons on the map with a highlighted card on the right. Most importantly, it let us spike the Mapbox integration and get comfortable with the software.
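The random interaction mapping can be sketched roughly like this. This is a minimal illustration, not the prototype's actual code: the pin types, field names, and the Mexico City bounding box are assumptions, and the real prototype rendered these objects as Mapbox markers rather than printing them.

```javascript
// Sketch: seed a map with random sales-rep interactions.
// Pin types and the bounding box are illustrative assumptions.
const PIN_TYPES = ['sale', 'warm-lead', 'rejection', 'return-later'];

// Rough bounding box around Mexico City (assumed for illustration).
const BOUNDS = { minLat: 19.2, maxLat: 19.6, minLng: -99.3, maxLng: -99.0 };

function randomBetween(min, max) {
  return min + Math.random() * (max - min);
}

// Generate `count` fake interactions for a given sales rep.
function generateInteractions(repId, count) {
  return Array.from({ length: count }, (_, i) => ({
    id: `${repId}-${i}`,
    repId,
    type: PIN_TYPES[Math.floor(Math.random() * PIN_TYPES.length)],
    lat: randomBetween(BOUNDS.minLat, BOUNDS.maxLat),
    lng: randomBetween(BOUNDS.minLng, BOUNDS.maxLng),
  }));
}

// Sidebar view: interactions of one type, e.g. all warm leads.
function filterByType(interactions, type) {
  return interactions.filter((pin) => pin.type === type);
}
```

Selecting a pin on the map could then highlight the matching sidebar card by its `id`, which is how the icon-to-card linking behaved in our prototype.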

Building a Framer.js/Mapbox prototype in such a short timeframe gave us a fast feedback loop on how a map-based app could add value to our users. We discovered through the user tests and interviews that:

  • Seeing past routes was not useful at all. However, tracking warm leads, sales, and local tips from teammates was seen as a high-value add.
  • The users loved the idea of pinning their interactions, but also wanted to track information about the area - not just their customers. For example, a few users gave us unprompted feedback that they wanted to be able to flag areas as ‘dangerous’.


With the remaining three weeks, we set out to build the Framer.js/Mapbox prototype into a more fleshed-out app for the pilot.

We came up with three problem assumptions that we thought our product could solve: making uninformed decisions; communicating with managers; and progress + region penetration. We mapped them on top of their existing user flows and identified how each problem was solved with an improved user journey, using our app.

I found this a helpful exercise to both distill what the main pain-points the app was addressing, as well as keeping the user (more specifically, their user-journey) always at the center of the process. We printed this and kept it on a wall to serve as a constant reminder to what we were doing and to keep us honest when we got user feedback. For example, we made an assumption that we would improve their decision-making process if they used our app to help plan for the next day. We would check this assumption when we got user feedback to make sure our mental model of our product and user was accurate.

In June 2017, we prepared a pilot test plan that would give 20 sales reps a cheap tablet with cellular connectivity in Mexico City and Guadalajara. The original plan was to do an initial two-week pilot, spend one week iterating on the feedback / bugs, and then do another two-week pilot.

We tracked usage data and did weekly user interviews with our participants. This gave us quantitative data to figure out ‘what’ was going on, as well as qualitative data to figure out ‘why’. This is a snippet of the usage data from Guadalajara. Out of 10 participants, two used the app heavily, logging almost every interaction they had.
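Aggregating raw usage events into per-participant counts is the kind of analysis behind the "two heavy users out of 10" observation. A minimal sketch, assuming a simple event log; the field names and the "heavy user" threshold are illustrative, not our actual schema:

```javascript
// Sketch: count logged interactions per sales rep, then bucket reps
// into heavy users. Field names and threshold are assumptions.
function usageByRep(events) {
  const counts = new Map();
  for (const event of events) {
    counts.set(event.repId, (counts.get(event.repId) || 0) + 1);
  }
  return counts;
}

// Reps whose event count meets or exceeds the threshold.
function heavyUsers(events, threshold) {
  return [...usageByRep(events).entries()]
    .filter(([, count]) => count >= threshold)
    .map(([repId]) => repId);
}
```

Pairing counts like these with the weekly interviews is what let us ask the light users ‘why’ rather than guessing from the numbers alone.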

The interviews gave us more insight into the data we were receiving. The two heavy users loved seeing all their interactions on a map so they used the app a lot. Some logged only certain types of interactions but found the ones they logged to be very helpful.

Where we learned the most was from the people who did not use our app a lot. What we found out was that:

  • The areas they were in had low connectivity, making the app difficult to use.
  • The tablet screen was not bright enough, making it difficult to use during sunny days.
  • 4 out of the 10 felt scared to use it a lot due to safety concerns. One sales rep even said he put his smartphone in his sock while canvassing.



I learned a ton from quickly diverging and coming up with different ideas and prototypes. Taking learnings from each idea, test, and interview and applying them to the next idea to get closer to a valuable product was a great experience.


At times, we followed the Sprint GV method literally by the book. What I learned is that a good designer knows how to follow different frameworks and workshops, but a great designer knows when to use them. The entire team felt that we could have tailored and adjusted the Sprint GV method with other frameworks and methodologies in our ‘designer tool belt’.

Remote Users with Differing Cultures:

We worked with a great UX team in Mexico (Usaria) to help us conduct our user research. However, it was still difficult to have a ‘middle-person’ translate, conduct, and synthesize user interviews and tests. We spent a great deal of time working with the UX researchers to make sure they understood the test scripts and the meaning behind each question. Some questions, once translated, did not yield the insights we were hoping for.

We learned through Usaria that employees at a company like SKY in Mexico were very agreeable in public and when their managers were in the room. They often did not want to ‘offend’ us or share their disagreements, which made it quite difficult to tease out their real feelings. For example, we once asked a user what they thought of the app. They replied, ‘I love it, I use it every day!’ However, we knew they had only used it a few times during the pilot. We had to push Usaria to get past these surface-level pleasantries and arm them with the data on ‘what’ was going on so we could learn more about the ‘why’.