Perspectives in AI: from search to robotics with Hussein Mehanna, SVP Cruise and Pear’s Aparna Sinha

On March 1st, Pear’s Aparna Sinha hosted a fireside chat with Hussein Mehanna, SVP of Engineering for AI/ML at Cruise, for a discussion on next-generation AI/ML technologies. Hussein has a long and deep history of innovation in machine learning engineering spanning speech recognition, language models, search, ads, and ML platforms at companies including Google, Facebook, and Microsoft. He is currently focused on ML-driven robotics, especially autonomous vehicles at Cruise.

This is the first in a series of AI/ML events Pear is hosting. To hear about future events, please sign up for our newsletter and keep an eye on our events page.

The exciting conversation lasted for over an hour, but below is a summary with some highlights from the talk between Aparna and Hussein:

Q: You’ve been building products at the forefront of AI throughout your career, from search, to speech to ML platforms and now robotics and autonomous vehicles. Tell us a little bit about your journey, and the evolution of your work through these products?

A: My journey began with a scholarship for neural networks research in 2003, followed by a role at Microsoft. I eventually joined Facebook and worked on Ads, which pushed the limits of ML, and from there moved into a broader role working on ML platforms across the company. I then joined Google Cloud’s AI team to explore the disruption of enterprise through ML. I learned over the years that robotics is the biggest field facing disruption from machine learning, and autonomous vehicles are the biggest application of that. So I joined Cruise both out of interest in robotics and a pure interest in cars.

Q: Ads at Google, in fact, were also the birthplace of a lot of advanced AI. And now AI is in absolutely everything.

A: Absolutely. There was a system in Google Ads called SmartASS. It was actually one of the first known large-scale machine learning systems. And the person who developed it, Andrew Moore, eventually became my manager at Google Cloud AI. You’d be surprised how many lessons learned from building machine learning for ads you could apply to something as advanced as autonomous vehicles.

You’d be surprised how many lessons learned from building machine learning for ads you could apply to something as advanced as autonomous vehicles.

Q: We are seeing the emergence of many AI-assistive products, co-pilot for x, or auto-pilot for y. But you’ve spoken about AI-native products. Are AI-assistive products and AI-native products fundamentally different?

A: Yes, they are. An AI-native product is one that cannot exist, even in MVP form, without machine learning. Examples include autonomous vehicles or speech recognition software like Alexa. On the other hand, AI-assistive products can help humans in various ways without necessarily using machine learning. In fact, people may not know this, but Google Search started with more of a data mining approach than machine learning.

Q: What is the gap between building an AI-assistive product versus an AI-native product?

A: The gap is huge. Building an AI-native product assumes full autonomy, while building an AI-assistive product assumes a human being will still be involved. For example, the technology used for driver-assist features (level 1-3 autonomy) and the technology for fully autonomous driving (level 4-5 autonomy) require vastly different approaches and parameters. Autopilot is actually classified as driver assist. But once you remove the driver completely from behind the wheel, you get into level 4 and level 5 autonomy. Level 5 is perhaps less dependent on a predefined map: you could throw the robot anywhere, and it’ll figure out its way. It’s very important for founders, entrepreneurs, and product managers to understand whether they are building something that assists human beings, and therefore assumes a human being, or something that completely replaces them.

Q: Where do generative AI and GPT technologies fall on the spectrum?

A: Generative AI and GPT – so far – are human-assisted technologies that require a human being to function properly. Today, they are not designed to replace humans like technologies used for level 4-5 autonomy.

Q: At a high level, what are the components and characteristics of a fully autonomous system? I’ve heard you call it an AI brain.

A: Let me frame the problem at a very high level using driving, since I suspect most of us have driven before. For a fully autonomous system, the first component is perception: you need to understand the environment and essentially describe it as the here and now. This is a vehicle, it’s heading this direction, with this velocity; here’s a pedestrian, he or she is x distance away from you, and they’re heading that way, and that’s their velocity. Here’s a pile of dirt. And here’s a flying plastic bag. And here’s something that we don’t know what it is, right? So perception is extremely important, because if you don’t understand the environment around you, you don’t know how to navigate it.

Now, what’s very important about perception is that you can’t build a perception system that is 100% perfect, especially a rich system that describes all sorts of things around you. So one of the lessons we’ve learned is that you can build multiple levels of perception. You can build a level of perception that is less fine-grained: a machine learning system that only distinguishes a couple of coarse categories can generalize better. And it’s very important for your perception system to have some self-awareness, so that it tells you the rich system is confused about this thing here. Then you can go to the less sophisticated system and understand whether it’s something that is safe to drive through or go around. The reason you need the rich system is that it gives you rich information, so you can zip through the environment faster and finish your task faster. And if your rich system is only accurate, let’s say, x percent of the time, with a little bit of unsureness, then it’s okay to drive a little bit slower using the less rich, less refined system. So that’s number one, perception.
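The tiered fallback Hussein describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names, labels, and confidence threshold below are stand-ins, not Cruise’s actual perception stack.

```python
# Hypothetical sketch of tiered perception with self-aware fallback.
# Threshold and labels are illustrative assumptions, not a real system.

RICH_CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting the rich model

def classify_rich(detection):
    """Fine-grained classifier: returns (label, confidence).

    Placeholder: a real system would run a large perception model here.
    """
    return detection.get("rich_label", "unknown"), detection.get("confidence", 0.0)

def classify_coarse(detection):
    """Coarse fallback that only decides 'drive_through' vs 'go_around'.

    A two-category model generalizes better on objects the rich model
    has never seen (piles of dirt, flying plastic bags, ...).
    """
    return "drive_through" if detection.get("traversable", False) else "go_around"

def perceive(detection):
    """Use the rich output when confident; otherwise fall back and slow down."""
    label, confidence = classify_rich(detection)
    if confidence >= RICH_CONFIDENCE_THRESHOLD:
        return {"label": label, "speed": "normal"}
    # Self-awareness: the rich system flags its own confusion, so we defer
    # to the coarser, more general model and drive more cautiously.
    return {"label": classify_coarse(detection), "speed": "reduced"}
```

The key design point is the explicit confidence check: the rich model is allowed to say it is unsure, and the system trades speed for the coarse model’s better generalization.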

The second component of autonomous driving is prediction, which involves understanding how agents in the environment will interact with each other; for example, predicting whether a car will cut you off or slow down based on its behavior. However, the future behavior of other agents depends on how your own car behaves, leading to an interactive loop. We’ve all been in this situation: you’re trying to cross the road, and a car is coming up. If you’re assertive in crossing the road, the car will very likely stop; if the driver is more assertive, you’ll probably back off. At Cruise, we no longer separate the prediction system from the maneuver planning system. We have combined them to decide jointly on the future behavior of other agents and our own, to solve extremely complicated interactive scenarios, including intersections with what we call a “chicken dance,” where cars inch up against each other. We now call this the “behaviors” component.

The third component is motion planning and controls, where the car starts actually executing its planned trajectory smoothly. This component plays a huge role in delivering a comfortable ride because it can accurately calculate the optimal braking profile that reduces jerk (or discomfort). Most of our riders feel the difference immediately compared to human driving, where a human driver might pump the brakes harder than necessary.

Simulation is also a critical component of autonomous driving. It is often considered only a testing tool, but it is, in fact, a reverse autonomous vehicle problem: simulation involves building other agents that behave intelligently, such as human drivers, pedestrians, and cyclists. At Cruise, we have seen massive improvements in simulation since we took a big chunk of our AI and autonomous vehicle talent and put them on simulation. The technology we are working on is generalizable and broadly applicable to any robotics problem, such as drones and robots inside warehouses.

I like to tell people that by joining Cruise, people are building their ML-driven robotics career, which can be applied to many other places. The stack of perception, prediction, maneuvering, and simulation can be scaled to other robotics problems. Robotics is pushing AI technology to its limits as it requires reasoning, self-awareness, and better generative AI technologies.

Robotics is pushing AI technology to its limits as it requires reasoning, self-awareness, and better generative AI technologies.

Q: The concepts you described here (predicting and simulating, giving your AI system a reasoning model, and self-awareness about how confident it should be) are lacking in today’s generative AI technologies. Is this a future direction that could produce better results?

A: I do believe robotics is going to push AI technology to its limits, because it is not acceptable to build a robot that performs the operation correctly 99% of the time; the remaining 1% can introduce massive friction.

Generative AI is very impressive, because it sort of samples a distribution of outputs for a task that is not extremely well defined. There are so many degrees of freedom; it’s like, give me a painting about something. And then it produces a weird-looking painting, which in reality is an error, but you’re like, wow, this is so creative. That’s why I say generative AI, and particularly ChatGPT, do not replace human beings; they actually require a human operator to refine their output. Now, it may reduce the number of human beings needed to do a task. But it’s L3 at best.

Now, in order to build an L4-and-above technology, especially one with a massive safety component, you first need the various components of that technology to have some self-awareness of how sure they are. We as humans actually operate that way, with a self-awareness of uncertainty. L4 technologies are not going to be able to be certain about everything, so they have to be self-aware about the uncertainty of whatever situation they’re in. And then they have to develop policies to handle that uncertainty, rather than just telling you whatever they’ve learned statistically, without self-awareness of their accuracy.

Q: What do you think about the combination of generative AI and a human operator in various fields such as education and healthcare?

A: Using generative AI alongside a human operator can result in an incredible system. However, it’s important to be mindful of the system’s limitations and determine whether you’re creating an L3 system with more degrees of freedom or an L4 system with no human oversight. In the field of education, generative AI can be a valuable tool, but it’s crucial to acknowledge that education is a highly sensitive area. On the other hand, in healthcare, as long as a physician reviews the outcomes, there are considerable degrees of freedom.

Q: I’ve heard great reviews from riders using Cruise’s service in San Francisco. What was your experience like in a driverless ride?

A: My first driverless ride was in a Chevy Bolt vehicle with a decent sensor package on top. At first, I felt a little anxious, but quickly realized that the vehicle was an extremely cautious driver that obeyed stop signs and braked very well. The vehicle optimized the braking and turning speeds, which made me feel safe and comfortable. I have seen the same reaction from family and friends who have ridden in the vehicles.

I think the new Origin car is amazing: it’s purpose-built for autonomy, with no steering wheel and two rows of seating facing each other, and it looks like the future. I believe it’s going to be a very different experience from the current driverless rides, as it sinks in that there’s no driver and the car is really moving itself. The feedback from multiple people who have experienced it is that it’s as big a moment as their first driverless ride. I also think people will love the Origin because it’s more comfortable and cautious than any vehicle with a driver. The first version of the Origin should be deployed this year, and I hope many people will have the opportunity to experience and enjoy it within the next year or two.

Q: What are some open questions and unsolved problems as we move forward in building autonomous vehicles?

A: One open question is how to move towards end-to-end learning for autonomous vehicles, which would involve creating a single, large machine learning model that takes in sensor inputs and produces control signals, rather than the current system, which is heavily componentized. Another question is how to create an equivalent to the convolutional operator, a key component in computer vision, for autonomous vehicles. This is still an early stage field that requires significant investment to develop.

Q: At Facebook, you pioneered a new approach to AI platforms that then also later permeated our work at Google Cloud. And I think it was a very meaningful contribution. Can you explain why platforms are important for machine learning productivity?

A: I pioneered a new approach to AI platforms at Facebook that focused on productivity and delivering machine learning models quickly. I believe productivity is key to successful machine learning because it allows for quick iteration and a faster feedback loop, and platforms are the best mechanism for delivering models quickly and making machine learning a reality. What is much more powerful than building one centralized model that serves everybody is empowering everybody to build the models they want, and to tweak and tune them the way they like. That’s where a machine learning platform comes in, and I do believe that is very much true in our organization. I saw that happen at Facebook, where at one point, around 2017, 20% of the company was either interacting with or building machine learning models one way or another.

Q: In summary, are we at an inflection point in machine learning? Can autonomous systems approaches influence responsible AI more broadly?

A: I believe we are at an inflection point where machine learning is expected to have a massive impact on multiple fields, including autonomous vehicles, robotics, and generative AI. Robotics is pioneering the concept of reasoning about and understanding the environment, simulating it, and building your machine learning system to be accurate enough and to understand the externalities. All of it rests on the foundational bedrock of great platforms, which enable quick iteration and a faster feedback loop.

I also believe that the advanced work happening in robotics and autonomous vehicles will influence the future of AI, potentially leading to a more holistic and safe system that is oriented towards reasoning. In my opinion, one potential impact of autonomous vehicle technology on machine learning is around responsible AI. We should have one strategy for product safety, rather than separate strategies for product safety and ML safety. As an autonomous vehicle engineer, I spend more time evaluating the effectiveness of the system than building and tuning the ML model. The ability to evaluate the system effectively will become increasingly important, and I hope that there will be a generation of ML engineers that are used to doing so.

I believe that we are at an inflection point where machine learning is expected to have a massive impact on multiple fields, including autonomous vehicles, robotics, and generative AI.

We’d like to extend our sincerest thanks to Hussein Mehanna for joining us for this insightful chat. His expertise and experience in the field provided valuable insights into the current and future states of AI/ML. We look forward to hosting more conversations on AI, so please keep an eye on our events page!

Using AI to spot investment trends: how ChatGPT surprised me and why I’m on the hunt for the next big AI startup

I recently joined Pear as a Visiting Partner focused on early-stage investments in Machine Learning / Artificial Intelligence and Developer Tools. As I kicked off this new chapter, I sought advice on how to find the next billion dollar startup. The natural place to start: ChatGPT. 

Every time I asked ChatGPT “which areas in technology are likely to cause the greatest disruption in the next 5 years?”, it gave me different lists: everything from quantum computing to 5G to biotech and medicine. But AI/ML was always at the top of its list. So to get some detail on the potential of AI/ML, I asked “which problems should AI-based startups solve to maximize growth and ROI?”

Results from ChatGPT question: “which problems should AI-based startups solve to maximize growth and ROI?”

I noticed that ChatGPT picked up acronyms like ROI and expanded them correctly, and correctly interpreted queries with misspelled words (e.g., ‘distributed’ was corrected to ‘disruptive’ based on the context of the chat). That made me sit up in my chair!

ChatGPT’s responses are high level, so I tried to pin it down by asking what specific products our startup should build. It provided a specific list, but I could not get it to stack-rank the items or put numerical values against any of them.

Results from ChatGPT question around what specific products our startup should build.

To go deeper, I picked the ‘Personal Assistants’ product idea, and asked 

1. “How would we monetize our personal assistant?” and got this reply:

Results from ChatGPT question: “How would we monetize our personal assistant?”

2. And, “how would we differentiate it relative to competition?”

Results from ChatGPT question: “How would we differentiate it relative to competition?”

These were surprisingly good answers: multimodal chatbots that are personalized and integrated with enterprise systems could be quite useful. Finally I asked ChatGPT to write the business plan and it did!  I shared this business plan with Pear’s healthcare investors and it even passed a few checks. Now if only ChatGPT could generate founders and funding, we’d be all set!

But let’s fast forward to the future, which is what Pear’s portfolio companies are building. First of all, they are using AI to solve real problems, such as automating an existing workflow. For example, Osmos.io uses a form of generative AI to create data transformations in a no-code data ingestion platform that’s replacing fragile, hard-to-maintain hand-coded ETLs. Sellscale uses generative AI to create better marketing emails. Orby.ai discovers and automates repetitive tasks in finance, HR, customer service, sales, and marketing. And CausalLabs.io is creating the infrastructure layer for UI that is increasingly generated, optimized, and personalized by AI.

Kirat Pandya, Co-Founder and CEO of Osmos, a Pear portfolio company that helps companies scale their customer data ingestion and drive growth with self-serve uploaders and no-code ETL

These companies are building on foundational models like GPT and expanding them to get users accurate, effective outcomes that are orders of magnitude more efficient. When a technology creates orders of magnitude better outcomes, it is a game changer.

We believe AI in 2023 is what PCs were in 1983 or the internet was in 1995 – it will power enterprises and consumers to do things they couldn’t before and generate enormous value in the next five years. Much of it will come from startups that are in the very early stages today.

This brings me back to why I joined the fast-moving startup ecosystem at PearVC in the first place: the time is now, and the opportunity to build the future with our next billion dollar startup is here.

The Data and Analytics Playbook for Startups


Ali Baghshomali, former data analyst manager at Bird, hosted a talk with Pear on data and analytics for early stage founders. We wanted to share the key takeaways with you. You can watch the full talk here.

While a lot has been said about building go-to-market and engineering teams, there’s not much tactical coverage of analytics teams. Yet analytics is one of the most fundamental and crucial functions in a startup as it launches and scales.

When should you start seriously working on analytics?
Why should you work on analytics?
Who should you hire?
What should be in your analytics stack?
What are some case studies of company analytics operations?
What should you do moving forward?


When should you start seriously working on analytics? 

You should start thinking about your analytics platform when your company is nearing product launch. After your product is live, you’ll receive an influx of data (or at least some data) from customers and prospects, so you want to be prepared with the proper analytics infrastructure and team to make the most of this data to drive business growth. 

If you are just starting out and would benefit from working with analytics but don’t have much in house, consider using third party data sources, like census data. 

Why should you work on analytics? 

If done well, analytics will pay back many, many times over in time, work, money, and other resources saved as well as powerful insights uncovered that drive meaningful business growth. 

Who should you hire? 

In conversation, people often use “data scientist” and “data analyst” interchangeably. While fine for casual conversation, you should clearly understand and convey the difference when writing job postings, doing job interviews, hiring team members, and managing data teams. 

Data scientists work with predictive models through leveraging machine learning. Data analysts, in contrast, build dashboards to better display your data, analyze existing data to draw insights (not predictions), and build new tables to better organize existing data. 

For example, at Spotify, data scientists build models that recommend which songs you should listen to or add to particular playlists. Data analysts analyze data to answer questions like how many people are using the radio feature? At what frequency? 

Similarly, at Netflix, data scientists build models that power the recommendation engine, which shows you a curated dashboard of movies and TV shows you may like as soon as you log in. Data analysts would conduct data analysis to determine how long people spend on the homepage before choosing a show. 

Unless your core product is machine learning driven, you should first hire data analysts, not data scientists. In general, a good rule of thumb is to have a 3:1 ratio of data analysts to data scientists (for companies whose products are not machine learning driven). 

For early stage startups, stick to the core titles of data scientists and data analysts rather than overly specialized ones like business intelligence engineers because you’ll want someone with more flexibility and who is open and able to do a wider range of work. 

What should be in your analytics stack? 

Here are examples of tools in each part of the analytics stack and how you should evaluate options: 

  • Database: examples include BigQuery and Redshift. Analytics databases are essentially a replica of your product database, but solely for analytics. That way, you can run analytics faster without degrading product performance. In general, it is advisable to use the same database service as your cloud provider.
  • Business intelligence: examples include Looker and Tableau. Business intelligence tools help you visualize your data. They connect to your analytics database. You should pick a provider based on pricing, engineering stack compatibility, and team familiarity. Don’t just default to the most well known option. Really consider your unique needs. 
  • Product intelligence: examples include Mixpanel and Amplitude. Product intelligence tools focus on the product itself, rather than the overall business; specifically, they focus on the user journey. They work via code snippets inserted into the product. Because they don’t capture the full code path, you should treat this data as an estimate and use the insights directionally. Product intelligence tools can be used to create charts, funnels, and retention analyses, and they don’t need to be connected to other databases.

What are some case studies of company analytics operations? 

Helping Hands Community is a COVID-inspired initiative that serves high-risk and food-insecure individuals during the pandemic.

  • Team: 7 engineers, no data analysts
  • Product: basic with 1000 users
  • Stack: Google Cloud, Firebase for product database, BigQuery for analytics, Google Data Studio for business intelligence, and Google Analytics for product intelligence 

Bird is a last mile electric scooter rental service. 

  • Team: 50+ engineers, 30 analysts, 8 scientists, 6 analyst managers
  • Stack: AWS for cloud, Postgres (AWS) for product database, PrestoDB for analytics, Tableau and Mode for business intelligence, Mixpanel for product intelligence, Google Analytics for website analytics, Alation for data cataloging, Databricks for ETL, and Anodot for anomaly detection (you generally need anomaly detection when ~1 hour of downtime makes a meaningful difference to your business)

What should you do moving forward? 

Make a data roadmap just like you make business and product roadmaps. Data roadmaps are equally important and transformative for your startup. List the top 5 questions you foresee having at each important point along this roadmap, and structure your data roadmap so that your stack and team address each question at the point it’s asked.

We hope this article has been helpful in laying the foundations for your analytics function. Ali is available to answer further questions regarding your analytics strategy, and he provides analytics and data science consulting. You can find and reach him on LinkedIn here.

5 Guidelines for Introducing Product Management to Your Company

This is a recap of our discussion with Nikhyl Singhal, VP of Product at Facebook, former CPO at Credit Karma, and former Director of Product Management at Google. 

Watch the full talk at pear.vc/events and RSVP for the next!

Product management can be an elusive topic, especially as its definition changes as the company grows. Early on, product management is focused on helping the company get to product market fit. Once the company achieves it, product management can change dramatically depending on the type of product or service, the organizational structure, and the company’s priorities. We brought Nikhyl Singhal to demystify the product management process and share insights on when, how and why to add product management into your company.

Jump to a section:

In the “Drunken Walk” Phase, Product Managers Should Really Be Project Managers

For Founders Working on Product Market Fit, Maintain Healthy Naivete

If You’re Not a Product Person, Find a Co-Founder Who Can Own Product Market Fit

Introduce Product Management When Founders Shift Priorities

Look for Product Managers Who Can Scale with the Organization


In the “Drunken Walk” Phase, Product Managers Should Really Be Project Managers

While employees at early stage companies may have Product Manager as their title, they should really be owning project management and execution.

Product management, or the goal of helping that company get to product market fit, should be owned by the founders. 

It’s partially an incentive problem. Founders, as Singhal notes, are usually the executives with a larger share of ownership.

“They’re the ones that the investors have really placed money in and the extended team in some ways just aren’t at the same level in scale as the founders,” Singhal says.

However, execution and distribution are team responsibilities, and Singhal considers them much more of a utility than a strategic function. Understanding the allocation of responsibilities between founders and product managers in early stage companies can be crucial to success.

“I actually embrace this and I [would] suggest, ‘Look, there’s no shame in saying that we need to bring in product managers to really own a lot of the execution.’”

For Founders Working on Product Market Fit, Maintain Healthy Naivete

For early stage founders, Singhal says not to discount naivete. He recounts from his own experience that while others had insider perspectives or felt jaded, his own beliefs helped propel him through company building, ultimately helping him found three companies in online commerce, SaaS, and voice services.

“I think that the lesson, if I were to pick one, is that healthy naivete is a critical element to finding product fit and actually fortitude around some of those ideas that are, ‘Hey, the world should work this way,’” Singhal reflects. “‘I don’t quite understand the industry, but I want to focus on that user, that business problem, and go forward on it.’”

If You’re Not a Product Person, Find a Co-Founder Who Can Own Product Market Fit

“The speed to be able to go through product fit is so essential for being able to efficiently get to the destination in the final course of action for the company,” Singhal says.

Thus, while it’s possible for founders to take other roles early on, purely outsourcing product fit to the rest of the team is still not a wise decision.

“If you’re not the person who’s owning product fit and you agree that product fit is sort of job number one, what I would say is—find a co-founder who can be essentially the owner of product fit. The reason why I use the term co-founder is for the economics to work.”

Introduce Product Management When Founders Shift Priorities

One issue founders often face with product management is determining when to introduce it. Introducing it too early may lead to conflicts internally, while introducing it too late means the company may have missed out on the prime time for strengthening execution. 

Again, product management is dependent on the founders’ backgrounds. For founders who have backgrounds in product, as long as there is clarity and predictability around what will happen, the company may proceed without product managers. The most common case for introducing product management, however, is when founder priorities need to shift from product fit to scaling the organization.

“This could be establishing new functions,” Singhal notes, “Or fundraising or thinking through acquisition. Marketing is also an important area, or thinking through company culture if the company starts to scale. At this point, if you fail to bring in product management, you’ll see dramatic reductions in efficiency.”

Look for Product Managers Who Can Scale with the Organization

For early product manager hires, companies should consider both the growth curve of the company and the growth point of the individual. Especially for companies that may be in hypergrowth, it’s important to have a mindset that “what’s gotten us here isn’t what gets us there.” This means the product management team must be adaptable. 

Being aware of how product management interacts with other functions is also crucial. 

“Product tends to essentially sit between the power functions of the organization as it deals with scale and growth,” Singhal says. It could be between marketing analytics and product engineering, or sales and product, depending on what the company’s business model is. 

Lastly, founders need to examine their own career trajectories in transitioning product power to teammates. It can be a tough emotional decision, Singhal acknowledges, but this question should be asked early on.

“I think that it’s almost a psychological question around: what is the person’s career ambition as a founder? Do they see themselves as moving into a traditional executive role? Shall I call it CEO or head of sales or head of product? If the goal of the person is to expand beyond product, then I think that the question really deserves quite a bit of weight,” Singhal says.

15 Mistakes Startups Make When Building Their First Engineering Teams

This is a recap of our discussion with Pedram Keyani, former Director of Engineering at Facebook and Uber, and our newest Visiting Partner. Keep an eye out for Pedram’s upcoming tactical guide diving deeper into these concepts.

Watch the full talk at pear.vc/speakers and RSVP for the next!

Mistake #1: Not Prioritizing Your Hires

The first mistake managers encounter in the hiring process is not prioritizing hires. Often, when faced with building a company’s first team, managers tend to hire generalists. While this is a fine principle, managers must still identify the most critical thing to build first.

“The biggest challenge that I see a lot of teams make is they don’t prioritize their hires, which means they’re not thinking about: what do they need to build? What is the most critical thing that they need to build?”

Mistake #2: Ignoring Hustle, Energy, and Optimism

People naturally prefer pedigreed engineers — engineers who have worked at a FAANG company, for example, or who have built and shipped significant products. But young companies that haven’t yet established a reputation are more likely to attract new college grads.

“They’re not going to know how to do some of the things that an engineer who’s been in the industry for a while will do, but oftentimes what they have is something that gets beaten out of people. They have this energy, they have this optimism. If you get a staff engineer that’s spent their entire career at—name-your-company—they know how to do things a particular way. And they’re more inclined to saying no to any new idea than they are to saying yes.”

So don’t worry too much about getting that senior staff engineer from Google. Often, bright-eyed, optimistic young engineers just out of school work well too. 

Mistake #3: Not Understanding Your Hiring Funnel

Managers must be aware of how their hiring funnels are laid out. No matter what size of company or what role, a hiring manager must treat recruiting like their job and be a willing partner to their recruiters.

Get involved as early as sourcing. 

“If they’re having a hard time, for example, getting people to respond back to their LinkedIn or their emails, help put in a teaser like, ‘Our director or VP of this would love to talk to you.’ If that person has some name recognition, you’re much more likely to get an initial response back. That can really fundamentally change the outcomes that you get.”

Mistake #4: Not Planning Interviews

Once a candidate gets past the resume screen to interviews, that process should be properly planned. Interviewing is both a time commitment from the candidate and from the company’s engineering team. Each part of the process must be intentional. 

For phone screens, a frequent mistake is having inexperienced engineers conduct them. 

“You want the people who are doing the phone screens to really be experienced and have good kinds of instincts around what makes a good engineer.”

For interviews, Pedram suggests teams have at least two different sessions on coding and at least one more session on culture. 

To train interviewers, a company can either have new interviewers shadow experienced interviewers, or have experienced interviewers reverse-shadow new ones to make sure they’re asking the right questions and capturing the answers accurately.

Mistake #5: Lowering Your Standards

Early companies can encounter hiring crunches. At this time, hiring managers might decide to lower their standards in order to increase headcounts. However, this can be extremely dangerous. 

“You make this trade off, when you hire a B-level person for your company—that person forever is the highest bar that you’re going to be able to achieve at scale for hiring because B people know other B people and C people.”

What about the trade-off between shipping a product and hiring a less qualified teammate? Just kill the idea. 

“At the end of the day, these are people you’re going to be working with every day.”

Mistake #6: Ignoring Your Instincts

Mistake #5 ties into Mistake #6: ignoring your instincts. If there’s a gut feeling that your candidate won’t be a good fit, you should trust it.

“The worst thing you can do is fire someone early on because your team is going to be suffering from it. They’re going to have questions. They’re going to think, ‘Oh, are we doing layoffs? Am I going to be the next person?’” 

Mistake #7: Hiring Brilliant Jerks

During the hiring process, managers may also encounter “Brilliant Jerks.” These are the candidates that seem genius, but may be arrogant. They might not listen, they might become defensive when criticized, or they might be overbearing. 

The danger of hiring brilliant jerks is that they’ll often shut down others’ ideas, can become huge HR liabilities, and won’t be able to collaborate well within a team environment at all. 

So when hiring, one of the most important qualities to look for is a sense that “this is someone I could give feedback to.”

Mistake #8: Giving Titles Too Early

Startups tend to give titles early on. A startup might make their first engineering hire and call them CTO, but there are a lot of pitfalls that come with this.

“Make sure that you’re thoughtful about what your company is going to look like maybe a year or two year, five years from now. If you’re successful, your five person thing is going to be a 500,000 person company.” 

Can your CTO, who has managed a five person team effectively, now manage a 500,000 person team?  

Instead of crazy titles, provide paths to advancement.

“Give people roles that let them stretch themselves, that let them exert responsibility and take on responsibility and let them earn those crazy titles over time.”

Mistake #9: Overselling The Good Stuff 

When a team’s already locked in their final candidates, young companies might be incentivized to oversell themselves to candidates—after all, it’s hard to compete against offers from FAANG these days. But transparency is always the best way to go. 

“You need to tell a realistic story about what your company is about. What are the challenges you’re facing? What are the good things? What are the bad things? Don’t catfish candidates. You may be the most compelling salesperson in the world, and you can get them to sign your offer and join you, but if you’re completely off base about what the work environment is like a week, a month, and six months in, at some point, they’ll realize that you are completely bullshitting them.”

As Director of Engineering at Facebook, Pedram made sure to put this into practice. After mentioning the positives and perks of the job, he would follow up with “By the way, it’s very likely that on a Friday night at 9:00 PM, we’re going to have a crazy spam attack. We’re going to have some kind of a vulnerability come up. My team, we work harder than a lot of other teams. We work crazy hours. We work on the weekends, we work during holidays because that’s when shit hits the fan for us. I wouldn’t have it any other way, but it’s hard. So if you’re looking for a regular nine to five thing, this is not your team.” 

Make sure to set expectations for the candidate before they commit. 

Mistake #10: Focusing on the Financial Upside

Don’t sell a candidate on money as their primary motivation during this process. 

“If the key selling point you have to your potential candidate is that you’re going to make them a millionaire, you’ve already lost.”

Instead, develop an environment and culture about a mission. Highlight that “if we create value for the world, we’ll get some of that back.”

Mistake #11: Getting Your Ratios Wrong

Companies want to make sure that they have the right ratio of engineering managers to engineers. Each company might define their ratios differently, but it’s important to always keep a ratio in mind and keep teams flexible.

Mistake #12: Not Worrying About Onboarding

Once a candidate signs on, the onboarding process must be smooth and well-planned. Every six months, Pedram would go through his company’s current onboarding process himself, pretending to be a new hire. This allowed him to iterate and make sure onboarding was always up to date. 

“It’s also a great opportunity for you to make sure that all of your documentation for getting engineers up to speed is living documentation as well.”

Mistake #13: Not Focusing on Culture

Culture should underscore every part of the hiring process. It can be hard to define, but here are some questions to start: 

  • How does your team work? 
  • How does your team solve problems? 
  • How does your team deal with ambiguity? 
  • How does your team resolve conflicts? 
  • How does your team think about transparency and openness? 

“Culture is something that everyone likes to talk about, but it really just boils down to those hard moments.”

Mistake #14: Never Reorganizing

Mistakes #14 and #15 really go hand in hand. As many companies grow, they may forget to reorganize.

“You need to shuffle people around. Make sure you have the right blend of people on a particular team. You have the right experiences on a team.” 

Again, keep your ratios in mind.  

Mistake #15: Never Firing Anyone

Lastly, and possibly the hardest part of hiring, companies need to learn to let people go. 

“People have their sweet spot. Some people just don’t scale beyond a 20 person company. And, you know, keeping them around is not fair to them and not fair to your company.”

Transforming Healthcare with Tech

This is a recap of our discussion with Lindsay Kriger, Director of Transformation and Business Operations, Andrew Smith, Chief Operations and Innovation Officer, and Dr. Bobbie Kumar, a family physician, from Vituity, the largest physician-owner partnership in the U.S.

Watch the full talk at pear.vc/speakers and RSVP for the next!

Healthcare innovation is local
To get started, pick an area to focus on
Recommendations for new healthcare solutions

Healthcare innovation is local

Though tackling the American healthcare system seems extremely daunting, the most important rule as a healthcare founder is to remember to keep the patients’ and the communities’ needs centered. Like any startup founder, healthcare providers need to know their users.

“The reality of healthcare innovation is that it’s local,” said Kriger, Vituity’s Director of Transformation and Business Operations. “We need to talk to real people from diverse backgrounds, geographies, cultures, and clinical care settings and build companies from passion and personal experience. That will ultimately create the healthcare that we all want.”

Often, this means bringing care directly to the patient, especially in the COVID era, when patients may be wary of coming to hospitals.

“There are many sick people in this country that aren’t coming to the hospital or asking anyone for help, so taking care of people at home, or in their grandparents’ house, or at an SNF, or anywhere they are and reaching out to them has become incredibly important,” explains Smith.

To get started, pick an area to focus on

Our healthcare system is a $6 trillion problem. We spend close to 18% of GDP on healthcare every year and over $10,000 per capita—twice what any other industrialized country spends.

Naturally, numerous technical solutions and innovations have emerged in this space. Because of how vast the market is, however, it can be difficult to figure out how to start. 

This is where balance comes in. After thoroughly understanding your users, pick one area to focus on. 

“If you try to solve all of the problems of healthcare in one swoop, it is going to be extremely difficult,” Smith emphasizes.

Smith underscores the importance of iteration and building feedback cycles from diverse sets of users. 

“With whatever you’re rolling out, you need to hear from patients and providers–and not just in one location. One location is a great place to start, but one of my recommendations is having a diverse set of pilots.”

Again, because of how local healthcare is, understanding the specific needs of markets you are targeting will help you build the best solutions. 

At scale, however, Vituity firmly believes that all users and all markets should be serviced. 

“We’re going to take care of everyone equally, always. And we want to build solutions and products that work for the entire community in this country.”

Recommendations for new healthcare solutions

Dr. Bobbie Kumar, Director of Clinical Innovation at Vituity, thinks a key area for innovation in the healthcare space is physician productivity.

When Dr. Kumar was an intern, she had 45 minutes to meet each patient. As a doctor, she only gets 10-15 minutes to do the same amount of work. 

“Reducing the administrative burden that myself and the care team have to experience is going to be a very key and poignant feature in how solutions are able to penetrate the healthcare space,” she added.

Beyond that, she leaves two other recommendations for innovating solutions within the healthcare industry. 

“Disrupt the industry, not the mission,” she said. “The top two reasons for choosing medicine are to help people, followed by intellectual pursuit — wanting to find something that’s challenging enough for the knowledge base and the research.”

The last recommendation? “Preserve the humanism.” Technological solutions simply cannot account for the degree of human interaction inherent in physician-patient relationships. 

“Until we really start looking at healthcare as both patient-centric and provider and care team supported, we’re just going to end up missing the target. So, my hope for the future is that we disrupt the status quo and we model these solutions that not just promote, but really emphasize the value and the unique human experience.”

From Software Engineer to CTO

Cathy Polinsky, former CTO at Stitch Fix, chatted this week with Pear associate Harris, who leads our talent initiatives. This is a recap! Watch the full talk at pear.vc/speakers.

It starts as an obsession. You play with your first coding language, and you’re hooked. You keep learning new languages and building cool things with your new skills. You’re doing this in an exciting environment with enthusiastic mentors who support you.

Cathy Polinsky, former CTO at Stitch Fix, was lucky enough to grow up this way. In the 80’s, Cathy’s mother was a teacher, and Apple personal computers had arrived at school. Cathy’s mother thought the technology was interesting and bought one for their home (before they had a VCR!).

Cathy learned her first programming language, Logo, and went on to learn basic programming. Computer science was so new then that there wasn’t formal education for it in school, but growing up in Pittsburgh meant that Cathy was able to go to a summer camp at Carnegie Mellon.

“We did algorithms in the morning and more hands-on programming in the afternoon with robotics and maze solving problems, and I just really got hooked and knew I wanted to study it after high school,” said Cathy.

Sure enough, Cathy did just that, and went on to a thriving computer engineering career. In this fireside chat with Pear, she looks back at her career and highlights the building blocks she gained from each role to help future engineers make decisions about their own careers.

“A career is made in hindsight, not while you’re on that path. While you’re on it, it might not feel straight, but you can in retrospect see this arc of where you’ve come from. I’d say I have a little bit more of a clear arc than most, but I know many other people who have had different paths that have gotten there or to some other destination that they have been really excited about.”

Laying the groundwork as a software engineer scaling early Amazon
Hands-on, non-stop building at an early stage startup
Moving into management and people skills at Yahoo and beyond
Cathy’s Career Advice
Cathy’s Advice On Hiring
Tactical Hiring Tips

Laying the groundwork as a software engineer scaling early Amazon

Like many young graduates in computer science today, Cathy’s first job out of school was as a software engineer at a fast-growing company — Amazon, in 1999, right before the dotcom crash.

“I thought I’d missed the interesting times at Amazon. I thought I’d missed the early stage startup. Little did I know, I was living the interesting times. It was a gritty time. I was there during the dotcom collapse and all my stock was underwater,” Cathy recalls.

Things were still early enough for Cathy to grow fast with the company (she had a door desk when she started!). Amazon had hundreds of engineers then, and everyone was brilliant, but the rapid pace meant the environment was a bit chaotic. The code base, for example, was a mess, with little documentation.

“It was an amazing time where I learned how to be an engineer being in a company that was scaling so fast. I learned a lot about how to hire people earlier than any other job I would have gotten,” says Cathy.

At Amazon, Cathy had the opportunity to learn from the tech giants. She built a solid, rigorous foundation for her career, learning to ship things at scale and having to think through everything from interview processes to database and schema design, to hardware constraints, automation, and testing.

But in addition to engineering skills, Amazon helped Cathy to gain a deep appreciation for the growth mindset — a mindset that serves her even today.

“Jeff [Bezos] never wanted to be written in an article and have Amazon in the same sentence with any other early stage dotcom company. He only wanted to see himself with the giant companies that were out there. I’d say that I was much more of a voice of growth and opportunity as a leader at Stitch Fix in the sense of: ‘How can we invest more, and how can we grow more?’ and making sure that we were keeping our eyes always on the long-term.”

Hands-on, non-stop building at an early stage startup

Cathy soon got the startup itch and wanted to see what early startups were like, so she went and joined a 13-person startup in San Mateo.

At first, with only two years of work experience at a scaling company, the early startup environment felt quite different. Cathy was surprised to learn that there were no automation suites and other big tools she took for granted at Amazon.

“But, I wrote way more code there than any other place, and I got to wear more hats and I got to go to a pilot customer and be involved when we were doing discussions about name changes and logos,” says Cathy. “You got to interact with people that weren’t just tech and be much more involved in the company on a different scale.”

Cathy also experienced the pain of building great products that didn’t work out.

“The company never really got off the ground. We only had two pilot customers, couldn’t get sales. There was a recession going on. It was heartbreaking when we closed the door to feel like I put so much of my blood, sweat, and tears into building something that I was really proud of.”

It was valuable first-hand experience in understanding that a successful company was not just about building a great product, but also about building the right thing and checking in with the market for early feedback.

Moving into management and people skills at Yahoo and beyond

After the startup heartbreak, Cathy turned back to big company world, though she “wouldn’t recommend to anyone to overcompensate that strongly,” she laughs.

At Oracle, Cathy realized she missed the fast pace of earlier companies, and sought to move into management. A friend pointed her to a position opening up at Yahoo.

Cathy ended up being offered her choice between two roles there — an engineering management role and an engineering lead role. She decided to try management and never looked back, going on to lead engineering at Salesforce after Yahoo, and on to the C-suite at Stitch Fix.

One of the first lessons Cathy learned in her first management role at Yahoo was to stay out of the code. The engineering manager she inherited the role from said that she regretted being so hands-on and jumping in too quickly to fix bugs or help her team with the solution.

“I really took that to heart. If I’m jumping in to heroically save the day, I’m not actually working on the fundamental issues that are going to help the team be successful in the long run. That has really influenced how I spend my time and how I look at the role,” says Cathy.

That is, transitioning into a management role means focusing your attention more on the people issues rather than the tech issues.

“I joke often that I would have been better off being a psychology major than a computer science major,” Cathy laughs. “Dealing with people in some of these large organizations is much more complex, and not deterministic in any way. You never know if you’re doing it the right way or not. I think that I’ve spent more sleepless nights thinking about people issues than I have about technology issues.”

In all the companies Cathy has been in, it’s been key to treat every single tech production incident as a learning opportunity.

That’s because if you’re shipping software, it’s inevitable that you are going to break something eventually. So, the question software engineers should be thinking about isn’t “How do we avoid breaking things?” but rather, “How do you make sure you don’t have the same problem happen again, and how do you make sure that you learn from it and get better and have other people around you learn from it, so they don’t have the same mistake that you had?”

Cathy is a fan of blameless postmortems.

“We get in a room and we do a postmortem of what happened, but the only rule is you can’t point fingers at anyone. You don’t name any names. You’re not trying to get anyone in trouble. If you can really approach any problem with that open mindset and learning, then you will make sure that you uncover all the problems in the areas and you won’t have the same problems again.”

Cathy’s Career Advice

  • There’s no wrong answer to joining big tech vs. joining a startup vs. starting your own company.
  • Take the roles where you’re going to learn the most.
  • Follow your passions.
  • If you are interested in something that’s different than what you’re doing today, just tell people that. People will see you differently and think of you when opportunities arise when you tell people what you’re passionate about.
  • Don’t worry about where you’re going to be 10 years from now, but have an idea of what you want to do next.
  • If you don’t know what you want to do next, talk to a lot of people.

Cathy’s Advice On Hiring

Anytime you can, hire someone smarter than you.

A lot of people have a hard time doing that. They think, “Oh, they have more experience, or they’re smarter than I am, so what am I needed for?” You will never fail if you hire people who are smarter than you and know more than you.

Extend outside your network and find people who are going to push you.

I have worked in a lot of companies that focus on hiring within your network and tout the great track record they have for internal referrals. As a woman engineer, I think that’s really dangerous. I think that you have only a specific network of people, and if you continue to hire people in your network, you’re only going to see people who look exactly like you and you’re not going to push yourself and get diverse opinions around the table. You’d be better off really trying to extend outside your network and finding people who are going to push you and bring different experiences to the table than you have in your own. That’s something that we were really intentional about at Stitch Fix — making sure that we were reaching out into diverse networks and seeing people that were different than the people that we already had.

Follow the structure of what you set out to do — don’t rush.

One of the challenging hires was not someone who worked directly for me, but one level down, and was a referral from someone in the organization. We did a rush job hiring and leaned on the referral, but the person really did not show up living the values that we had as a company. We started to see some things that didn’t show up the way that we would expect for someone of their level. When we did a debrief on what happened, we realized that we hadn’t done reference checks and we hadn’t really done a full check on: ‘Is this person a good fit for the values that we have?’ It was a pretty big miss that we hadn’t followed the structure of what we had set out to do, and it really caused a lot of friction across the team.

It’s critical to understand what you need for the stage of your company.

In the early days, Amazon would rather hire someone that had a couple of amazing interviews and a couple of bad interviews than someone who was mediocre across the board. They wanted to see someone who really excelled in a specific area, even if it meant they had holes in other areas. I like that sense of playing to someone’s strengths, especially as a larger company, you can take more liberties in that way.

It’s harder in an early stage company… you have funding for three hires. You’re going to need to hire some real generalists who like to build things, who can also answer phones and do hiring and operations and the like. Thinking about those first hires as generalists who are interested in getting to wear a lot of different hats is important. It’s kind of fun for the people who do it, to get to build things from the early stage.

Then you have to think about how you scale it. Because at some point, those people are probably not going to be happy after you get to a team of 100 where they’re not the best at any of those things. Either they scale, and they still know everything and are this amazing person that can jump in anywhere and add value, or they get really dissatisfied that they can’t really play the same role that they had before.

As you’re scaling things out, you’re hiring more narrow generalists of, “Hey, we really need someone who understands AWS deployments, or we need someone who really understands this mobile technology stack,” or whatever it might be.

So, if you’re really thinking about building and scaling with the business, as the business scales, you have to think about what stage you’re in and know that what works before is not going to be the thing that’s going to be successful after you get to 50 or you get to 100.

Tactical Hiring Tips

  • Take home interview questions and pair-programming interview questions are a good way to see what people are going to do in their day to day. You don’t work by yourself — you work in a team and so seeing how people work with a team is good.
  • Have candidates interview with non-technical team members and solve a problem together. It’s important for technical people to be able to talk with nontechnical folks. At Stitch Fix, this practice has enabled a much higher EQ team.
  • Have inclusive language in your job description and list fewer requirements to be more welcoming to women. Avoiding bro-like or gendered language is of course the obvious thing to do, but what might be less obvious is that long requirements can rule out certain populations of people! Men will tend to apply anyway despite not meeting all requirements, and women will tend to filter themselves out because they don’t have nearly enough of the requirements on this list.
  • Sit in on every debrief. You don’t necessarily need to meet every candidate, but you should listen carefully to candidate discussions and guide your team. “There were times where someone would explain the interview and then have a strong assessment on whether we should hire the person or not. I would listen to their rationale, but not agree with the outcome based on their assessment, so we could talk about that and dig in. Sometimes there were some things that changed the way that they thought about the interview, for example, something like, ‘They weren’t very confident in their skills.’ We would ask, ‘Why did you think that?’ And it could just be a passive word choice that they used, and someone else points out, ‘They’re actually really confident over here — and is that really a job requirement that you need?’”

Design Thinking Q&A with Bob Baxley

As we move into the last days of summer, with the world still very much uncertain, we highly recommend giving Bob Baxley’s Pear talk on conditions, constraints, and convictions a watch at pear.vc/speakers.

Bob shares three Cold War stories of magnificent innovation — reminders of the amazing things humans have accomplished and can still accomplish even in trying times.

Below, find insights from the Q&A session for thinking about conditions, constraints, and convictions for our own times.

When are constraints good for creativity, and when are they bad?
What can founders do to help creative teammates be most creative?
How do you figure out which problems to solve and which to defer?
What signals can you look for to indicate that you need to engage the creative side more, versus getting it done?
How do you find inspiration for design during the initial brainstorming phases?
How can we think about current pandemic conditions of not being able to see each other?

Q: Constraints may have boosted Dr. Seuss’s creativity, but it seems that a lot of conventional wisdom also suggests that having too many constraints can be bad for creativity — how do you reconcile this?

I think the question is getting at two different kinds of constraints. One is a resourcing constraint and the other one is defining the problem space. Trying to do too much with too few people is a different challenge. That’s really a question of prioritization, so I’ll leave that aside.

As a creative professional and somebody who’s been designing software products and leading teams for three decades, the worst thing you can do is to give me a lot of space and a lot of time. Every creative I know, we’re master procrastinators. We will always put off finishing the thing. If we have a big, giant open field, we don’t quite know how to get started. I’m a big believer in a well-defined problem space.

I like making sure that you have to show progress on a steady basis. With my teams at Apple and now ThoughtSpot, the team has to show me work basically every 48 hours. We are constantly looking at work. It keeps people from getting too caught up in their own heads and wandering off in the wrong direction, and it keeps us together as a team. It also reduces a lot of the emotional, creative pressure on people.

Structure’s the difference between an empty field and a baseball diamond. You can’t have a game if you don’t have the diamond. You can’t make progress if you don’t have the diamond. You’ve got to have some boundaries to make progress.

If you really go back and look at Chopin’s Preludes or Bach’s Goldberg Variations, there are a ton of great artistic achievements that were set up as a technical problem the artist was trying to solve. They go into the work that way. Most artists don’t just show up at the canvas and see where it goes — they’re trying to accomplish something.

Q: What can we, the founders, do to help our creative teammates be most creative and set the right constraints without setting too many?

In all design problems, what you’re trying to do is help the person understand the problem. Don’t get prescriptive about the solutions. Try to make sure we’re trying to accomplish the same goal, and then evaluate the design solution in the context of that goal.

A story I tell is: at a company I was working at years ago, we had a homepage. There was a link on the page, and the product manager said “Bob, you have to make that link blue.”

I said, “Why do we have to make it blue?” They said, “Just make it blue.”

We got in this debate about it being blue. I said, “No, I don’t want it. That’s going to ruin the aesthetic of the page.” And they said, “Well, I want it” and went behind my back and made it blue.

Then we had a heated discussion in the hallway. I eventually said, “What was it you were trying to accomplish by making it blue?”

Turns out, people couldn’t see it. And I said, “Oh, so it wasn’t prominent enough?” And the product manager said, “Yeah. It wasn’t prominent enough.”

I said, “Okay, great. Well, we’re professional graphic designers. We have 15 ways to make it more prominent, one of which is making it blue. What you did is you leapt to a solution, and you didn’t really present the problem to us.”

For founders and executives, I spend most of the conversation trying to get them to more specifically narrow down our understanding of the problem and to make sure that we’re on the same page with the problem. The benefit of doing that is that when you find the right solution, everybody will agree.

I often get questions about how to resolve creative conflicts between different design solutions. I always say — that means you haven’t agreed on the problem. Because if you had a shared understanding of the problem, the solution would be obvious.

Q: How do you and your team tend to figure out which problems to solve and which ones to defer or ignore?

Here’s another story, about my friend Steve.

When Steve was getting ready to work on Keynote for the first time, he went to Roger Rosner, the VP of engineering responsible for iWork. Roger asked, “How should we think about Keynote? We’ve got to compete with PowerPoint. There are already these other presentation packages. What should we do?”

Steve said, “Roger, there’s three things with Keynote. One is it should be really difficult to make ugly presentations. Two, you should focus on beautiful cinematic transitions. And three, you should optimize for innovation over PowerPoint compatibility.”

I use those as examples of strong tenets. Each one of those statements, for example, if we just take the last one, optimize for innovation over PowerPoint compatibility — you could imagine the thousands of hours of debate that the product team would have had if they didn’t have clarity on which side of that debate to be on.

Steve just said, straight off the bat, make it difficult to make ugly presentations. That’s a whole set of things to take out of the way. So, we get beautiful cinematic transitions, which to this day is the hallmark of Keynote and why I can’t stand using Google Slides or PowerPoint. That’s a very clear direction.

A lot of times when you’re trying to figure out what to do, ask yourself: do you really have clarity around the tenets? Because if you have that clarity, the priorities fall out much quicker.

Q: What signals do you look for in your own work that indicate you need to engage on the creative side more deeply versus buckling down and getting it done?

This is an interesting question we’ve been dealing with internally at ThoughtSpot. When does something pivot from being exploratory creative work to execution mode? It’s always a delicate balance, and it’s a little bit like being a parent. When do you give your kid that extra thing to go do? When do you let them go further, versus when do you stay closer to them?

There is a point with almost every project where the team has some sense of, “No, this is really what we want to go do.” I can’t give you a rubric for how to judge that, but you can usually feel it in the room. The debates are becoming more and more myopic and you don’t seem to be making the same level of progress. There seems to be general agreement that this is the thing we want to go do. At that point, you have to pivot into execution mode. There’s a point where you just have to sit down and finish the thing.

As a creative, I enjoy working in both those modes, because it’s emotionally draining to sit in that level of uncertainty of wondering what we’re doing. There’s a point when it’s actually really rewarding to put all that uncertainty behind you and just go and finish the dang thing.

We tend to do that with deadlines. We sign up for some sort of commitment. Maybe there’s a company event that you have to have a demo ready to show. Maybe you have to give a presentation like this. You sign up for different kinds of commitments, and then you can use that commitment as a forcing function.

Q: How do you find inspiration for design during your initial brainstorming phases?

In terms of software, I spend a lot of time just looking around in the world. I’m the annoying guy at the deli who asks to look at the point-of-sale system. How does the Clover UI actually work? I sit and chat with people about self-checkout at Safeway and how it sucks and why it sucks. There are software products everywhere, so there’s tons of inspiration to be found just looking around the normal world.

Q: We have a lot of new constraints these days — one of which is that we’re not with each other. What are your thoughts on our current conditions, and where do you see it going?

I’m on the side that says there’s no great reason to go back to the office. I always try to get my team to separate the idea of working from anywhere from the idea of the lockdown. Imagine your kids are still going to school, and imagine you can go work at the Starbucks, and you can go see your friends for lunch. You can work from Hawaii if you want. How does that feel?

Most people say, “Oh my God, that feels awesome” — so, it’s not the working from home thing, it’s the lockdown. The lockdown is not something within our control, as individual companies or individual citizens. So you have to parse those two.

I firmly believe that we can make this work, and I think there are some unique benefits to working from anywhere. We can now do things like company events with a few hundred people from around the globe. We wouldn’t have done that in the past; if we had, it would have been a huge deal, and it would have cost us an enormous amount of money.

You’ve got to dance with the constraints, and stop thinking about what you’ve lost by not being in the office, and start thinking about what you’ve gained — because you’ve gained a lot.

Can You Describe—Exactly—the Problem You’re Solving?

This is an excerpt from partner Nils Bunger’s talk for Pear Accelerator S20.

Pear Accelerator is a small-batch program, where our partners and mentors work hands-on with exceptional founders through the journey to product-market fit. Learn more: pear.vc/pearx

The tricky thing about finding product-market fit is that it’s easy to be misled by false signals (and to convince yourself with them). As Ajay mentions in his talk, the worst-case scenario is when customers are lukewarm about your product. They will seem excited about it and talk about the features they want, but when push comes to shove, they won’t buy the product, no matter how many features you add at their request.

The hard truth is: that’s because they don’t actually need or love it. If this is happening to you, you haven’t found product-market fit.

Again, from Ajay’s talk, if your users love your product, they’ll tell you.

So how can you get to that point of love? How can you set yourself up for success and avoid getting caught in the feature spiral of death — where you’re building and shipping but no one is buying?

Customer development. Or, verifying your insight before building your MVP. This talk is about how you do that, step by step.

Jump to a section:

The Mindset

How to Form A True Problem Hypothesis

How to Win Your First Validation Meetings

How to Extract Real Insight From Your Meetings

Tying It All Together

OMG It’s Starting to Work!


The Mindset

“Fall in love with the problem, not the solution.”

— Uri Levine, co-founder of Waze

You first have to understand that customer development is not sales. You’re not selling your product yet. You’re not trying to convince potential customers that you have the correct solution.

Customer development is anthropology. You’re studying your customers. You’re trying to answer the question — do these people actually have a problem? How do they describe it? What would it take to solve it? You’re trying to probe and get real data to confirm whether your insight about a solution is correct.

And to do that, you need to deeply understand the problem your customers have.

Prepare to spend a lot of time here, going in a circle from hypothesis to validation back to hypothesis over and over again. It might feel frustrating, but it’s far better to be stuck in this loop, learning about your problem, rather than being stuck in the product feature loop where you’re wasting time, money, and energy building things that don’t work. You want to stay in the customer development loop until you achieve repeatable sales or clear cut metrics that say that you’re onto something.

Form A True Problem Hypothesis

A problem hypothesis has these basic parts:

  • Problem: A concise, specific statement of the problem you solve
  • Audience: Focused set of people who you think desperately have this problem
  • Reasoning: What makes this problem something that people in the audience need solved?

The problem is “the what.” The two key points for a well-articulated problem statement are that it is (1) specific to something solvable and (2) in your customer’s language. If you’re missing either of these, you don’t actually know what the problem is.

For the audience, the most important thing is to be narrow and tight — who has the problem most acutely? Keep narrowing down your ideas until you have defined a concentrated pool of users with the most acute need.

For any amateur chefs out there, you can think of this like a reduction sauce — start with a big pot of some kind of juice and stir over the stove for many hours, slowly evaporating all the water and slowly concentrating the flavor of that juice. What’s left behind is the deep essence of the ingredients.

You want to be finding your group of customers with the deepest, most desperate need for a solution.

Finally, you need to double check yourself and make sure you have sound reasoning for your hypothesis. Do you know why your audience wants or needs that problem solved? Again, specificity is the key here.

If you don’t have a good idea of why people might need a problem solved (perhaps your reasoning is a bit circular — “this customer has this pain and they want it solved, because it’s painful”), it might not really be a problem in the first place. There are plenty of problems people have that they’re okay with tolerating, because solving them is more effort than it’s worth. Or, you’ve just made up a problem that doesn’t exist.

You’re looking for an urgent problem, or if you’re on the consumer side, you’re looking for people who are just itching for something new.

Win Your First Validation Meetings

Once you’ve got a solid hypothesis, you have to validate it. That means you need to collect data from unbiased people who don’t know you, which means you will need to lean into cold outreach.

While customer development isn’t sales, during this phase, you will need to put on your shameless salesperson hat a bit to get the meetings you need.

There’s no single answer to how you should reach out. This is actually a core part of your learning during this whole process. You will need to know the unique answer for your company: how do you reach the right people and what are you saying that activates them?

Your approach should have three key things:

  • A high response rate
  • The right target audience
  • A steady volume of meetings

Then, it’s really a numbers game. Aim for 10 first meetings per week, and then aim to keep increasing that number with iteration. Iterate your message. Iterate your audience. Keep reaching out.

You’re learning about your customers here and you’re also learning about your message, where your customers hang out, and what they respond to. These insights are just as valuable as the meetings themselves.

Extract Real Insight From Your Meetings

Alright! You finally get to talk to customers! But what do you say? How do you get good data from them? Nils offers this simple three-part meeting structure:

  • First 10 minutes: Broad questions to learn the unexpected.
  • Middle 10 minutes: Specific questions. Learn about your problem statement.
  • Last 10 minutes: Reconcile and zoom out. Did what you hear in parts 1 and 2 match up? Why or why not?

Broad Questions Phase

During the broad questions phase, your goal is to learn context about the user and the general area of your company. Find out about the incentives and biggest problems on their mind before being influenced by your ideas.

Some example questions:

  • What do you do here?
  • Who else do you work with most closely?
  • How do you spend most of your time?
  • What’s the most important thing for you to accomplish?
  • What’s the biggest challenge you have in your job now?
  • What are the top 3 problems you face?
  • Have you bought products to help some of these problems?

Do not tell them about your product idea, or what problem you’re after in this phase. Just try to understand how your prospect thinks about their day and what they need to accomplish.

During this phase, because your questions are broad, you’ll likely get some broad answers in response that won’t be very actionable for you. Make sure to spend some time drilling down into these answers. Pick one of the problems they talk about that seems relevant to you and ask for more details; peel back the onion a bit.

Specific Questions Phase

Your goal in this phase is to gain data points specifically about your problem hypothesis. Does this person have the problem you came up with? How badly do they have that problem?

Some example questions, and what you’re really after in asking them:

  • Have you ever had <your problem hypothesis>? Tell me about that.
  • How do you deal with it now?
    → Why You’re Asking: If this is a real problem, they’re probably doing something to deal with it already. And if they’re not, it’ll be useful for you to find out why.
  • How painful is this problem for you? How often do you have it?
    → Why You’re Asking: If this doesn’t come up that often, it probably won’t be a priority for them to find or buy a product for it.
  • Have you ever looked for a solution to this problem? Have you bought anything?
    → Why You’re Asking: If they have this supposed problem but then haven’t thought about it enough to try doing a simple Google search for solutions, maybe they just don’t care about it all that much.
  • Would <direct competitor> solve your problem? Why haven’t you bought it?
    → Why You’re Asking: Don’t be scared of this one. Remember, you’re not trying to sell a product, and in any case, you will have to assume in the future that your customer is going to know about your competitors—so you might as well ask them about it for your own competitive intelligence.
  • If you could solve this problem, what would change?
    → Why You’re Asking: Why does solving this problem really matter to this customer? Why does it make their life 10x better to not have to deal with it? This is what your product is going to need to solve for.
  • How valuable is that change?
    → Why You’re Asking: Value is much better than talking about pricing. You’re now trying to get as close as you can to a quantified version of the answer to the previous question. Does it cost them a lot of time? Money? Does it allow them to save on not hiring extra people?

Reconcile and Zoom Out

Now you want to reconcile what the customer has said in both the broad and specific phases to double-check that your data is valid. If your customer answered your specific questions in a way that makes it seem like you’re onto something, but didn’t bring the problem up on their own in the broad phase, you’ll really want to dig in and understand why. Maybe the pain isn’t as intense as you (or they) think it is, or maybe they think of the problem you came up with as just one piece of solving those larger problems.

Finally, you’ll want to zoom out and understand how your customer buys products or solutions to their problems. Ask:

  • What would be the next step if you had a product that solves this problem today?
  • Do you actually buy these kinds of products?
  • If you tried such a product, what would you want to see to keep using it?

You want to try to make this part as concrete as possible for the customer, almost as if it really does exist. Walk through their new customer journey with this product in their life. You’re looking for that “WOW” moment, as described in Bob Tinker’s talk.

Tying It All Together

Do at least 5 user interviews with the same type of audience you outlined. If you’re not finding an acute, concentrated pain, go back to your problem hypothesis and revise either your audience, or your problem, and run through it all again.

If you are starting to find an acute pain, do 5 more user interviews and drill down to the next level of questions. Show some product mockups and see if the pattern you’ve been seeing holds up.

Ask about discrepancies between your interviews. For example, if four of your five previous customers said something was extremely important, but your next five don’t mention it, just ask them about it: “These other customers had this big issue around XYZ, and that didn’t seem to come up here. Is there a reason? I’m curious about the differences between what you do and what they do.”

OMG, it’s starting to work!

The strangest thing happens when this process starts working: you’ll find yourself trying to have a customer development “anthropology” conversation, and your customer is trying to turn it into a sales conversation.

People start leading you, instead of you leading them. They want to buy this thing now, they want to know how they can try it out, they want to bring their colleagues to a meeting. It’s the WOW moment. If you can get it repeatedly with 3–5 customers, then it may be time to develop your MVP!

Two options for this phase:

  1. Create a “no-code” solution with phone / email / spreadsheets / whatever, and try selling that solution to a few customers.
  2. Try turning 3 of your most promising interviews into “design partners” to help co-create a solution. Try to get some written agreement in place.

In any case, we always recommend building the tiniest MVP you can, enough to go through a “build/measure/learn” loop, per Eric Ries.

We wish you luck and hope that all of you get to this magic moment! It’s our favorite part of the journey.

Setting Goals to Get to Product Love

This is an excerpt from Ash Rust’s talk for Pear Accelerator S20.

Pear Accelerator is a small-batch program, where our partners and mentors work hands-on with exceptional founders through the journey to product-market fit. Learn more: pear.vc/pearx


First things first: the most important thing over and above everything else that you can do to get to product love is to launch and get live. Live, as in: people outside of your team can use this product end to end, and at least one person has done so.

Once you’ve gotten live though, you need to “Test, learn, observe and iterate,” as discussed in our Product Market Fit lesson. After all, you’re aiming to make something that customers want to rely on and use fanatically. The only way you can get there is to learn from customer engagement. To learn most efficiently, you need to set effective goals for yourself.

Here’s a systematic process for doing so, from longtime startup mentor Ash Rust.

Jump to section:

Determine your North Star metric and supporting metrics

Set numeric, specific, and achievable goals

Assign owners to each goal

Start goal setting as early as possible


Determine your North Star metric and supporting metrics (no more than 3!)

First, you’ll want to determine your North Star metric. This is the metric your team will rally around.

Some examples: if you’re a SaaS business, your metric is usually some form of revenue. If you’re a marketplace, you might think about transaction volume. If you’re a pure consumer business aiming to monetize with ads, then you will likely consider something like daily active users.

While it’s very important to pick the right metric, you’ll also have to accept that it won’t be perfect. Since one metric won’t tell the whole story, you’re allowed to go one step further — but don’t go much further than that. Three supporting metrics is more than enough, and you don’t need an expensive analytics suite to calculate them.

Ash recommends breaking your business down into three sections, with a corresponding chosen metric for each: distribution, engagement, and churn.

Set numeric, specific, and achievable goals

To get your metrics moving in the right direction, you’ll need to break them down further into their own components, and set goals around those components. Meeting these component goals should ultimately ladder up to moving your North Star metric in the right direction.

To get started increasing revenue, for example, you’ll probably need to start generating some leads. To start generating some leads, you’ll need to start interviewing customers to get a sense of what they want, so you’ll need to start scheduling some initial meetings. Set your first goals around these tasks.

It’s key that all your goals are numeric, specific, and achievable.

For example, rather than having a goal such as “interview customers,” you’ll want to go a little bit further and say, “How many customers do we want to interview?” Is it 5? Is it 10–15?

Achievability is also important — if your goals are too outlandish, your team won’t take them seriously. Don’t set a goal for a $100 million contract to be signed this week. Set a goal that is much more reasonable that you can track against, perhaps something like “schedule first meeting with big wishlist client.”

However, don’t go too easy either. You still want to make sure the goal is impactful for your North Star metric. For example, “schedule 5 meetings” is a more impactful goal than “send 20 cold emails,” as it gets you closer to increasing that North Star metric of revenue.

Assign owners to each goal

Once you’ve set your numeric, specific, and achievable goals, it’s time to build your roadmap to achieving them. Assign owners to each goal and provide resources for them to complete it within a reasonable deadline.

At the early stage, the timeline shouldn’t be too far out, since things are changing so rapidly. Quarterly goals will not make sense. Ash recommends starting out with 3–5 goals every two weeks, but you’ll definitely need to adjust depending on your business. If you are a consumer web tool, for example, you might find you’ll need to set weekly goals, because you need lots of feedback to iterate as fast as you can. On the other hand, an enterprise company may need to use monthly goals, due to fewer customers and long sales cycles.

Every goal absolutely needs an owner. You cannot have more than one owner for a goal, because that diffuses accountability. You can’t afford to waste time playing blame games instead of understanding immediately why a goal was missed. Remember: rapid learning at this stage is everything. If you’re having disagreements about who should own a goal, alternate ownership between weeks (or whatever your timeline is).

The owner of the goal should be the one responsible for coming up with a plan to achieve that goal, complete with an estimate of resources that they’ll need. The manager will be responsible for providing those resources.

All of this creates clear accountability for all parties, which is crucial to iterating efficiently.

Start goal setting as early as possible

Final point: you want to start these processes as early as possible, even if your team consists of just you and your cofounder. If you don’t hold yourself accountable, you allow that to become the cultural norm for your future team. It gives future team members the idea that certain people or groups are exempt from achieving their goals.

Once you’re growing, the easiest way to instill a goal-setting mindset is to let your team set their own goals. As long as they are specific, numeric, and achievable, you shouldn’t have a problem. Even if their first goals are extremely easy or not quite impactful, that’s okay. You can help them iterate and refine their goals as time progresses. The key is to get everyone into the habit of doing it.

This goes for yourself too! While we’ve given a lot of advice here, the key thing in the end is to get going. In the same vein as your product, the most important thing to do is get live. Be thoughtful about your goals, but don’t get paralyzed. Take your best shot at setting some goals, and iterate from there.