How 3 PearX founders raised $3M+ each

Early-stage company building poses a unique set of challenges, and we’ve created what we believe to be the best accelerator out there for pre-seed founders. PearX is our exclusive, small-batch, 14-week program. PearX alums have an extremely high likelihood of startup success: 90% of our companies go on to raise a successful round from top-tier investors, and our alums have cumulatively raised $2B in venture capital.

But don’t take our word for it – we spoke with 3 recent PearX alums about their experiences going through PearX, and what sets it apart from other programs:

Bobyard (PearX S23):

Bobyard automates the construction takeoff process with CV and NLP models to make cost estimates 10x faster while eliminating mistakes. After Demo Day, Bobyard raised a $3.5M round from Primary.

EarthXYZ (PearX S23):

EarthXYZ makes imaging hardware and analytics software to create and process the highest-resolution hyperspectral data, using ML and genAI to deliver insights. They’re in the final stages of closing a sizable round; the only capital they had raised before this was their PearX check.

Advex (PearX S23):

Advex works on synthetic data for vision, solving the biggest bottleneck for applied machine learning. After Demo Day, Advex raised $3M from Construct Capital, Emerson Collective, and Pear.

Answers have been edited for clarity and brevity. 

What did you learn from PearX? 

Bobyard: I learned what good looks like. I’m a solo founder and a first-time founder. I didn’t know how fast a team could move until I entered PearX. The structure of the cohort, working with companies that are in similar places as you, pushes you to go further. 

I also learned how important it is to deeply understand your customer and the value prop of your product. Ajay and Sean constantly pushed me to dive deeper into the customer psyche, discovering where my product could be worth a lot more.

EarthXYZ: During the accelerator, we nailed down customer discovery, understanding what our customers want from our technology and ironing out what we needed to build. After Demo Day, we got a ton of new customers, so we juggled supporting those customers with continuing to build on the technical side.

Advex: During PearX, we went from having a potential customer to really fleshing out what our V0 was going to look like. We closed a real paying customer and since then, we’ve closed several more customers, raised the seed round, and grown our team to 5 people. 

What sets the Pear team apart? 

Bobyard: They have all bases covered. Pear knows the problems that most startups deal with, regardless of the vertical or the industry. 

Pear’s an order of magnitude smaller than the other accelerator programs out there, which means you get more than twice the attention, and you get support from a partner for each of the major categories. 

EarthXYZ: When I talk about Pear, the first thing that comes to mind is resources. I’ve heard from friends who have gone through other accelerators that they received a random firehose of generic knowledge. At Pear, I talked to dedicated partners 1:1 about specific strategies.

How has Pear helped your company the most? 

Bobyard: I’d never fundraised before. Demo Day is a really special experience, but without structure and guidance towards preparing your presentation, it’s very difficult to pull off. 

The next thing is hiring. It’s pretty difficult to convince someone that’s better than you to work for you, but hiring a great team is so important to the company. I worked with Nate after the accelerator to hire the first founding engineers at Bobyard, and all of them turned out to be amazing. He helped me with the job descriptions, the interview rounds, what we should look for… it’s a lot of knowledge that accrues over many years of hiring for startups. 

Advex: Now that we’re a seed-stage company, we’ve really been leveraging Pear’s hiring support, and it’s been invaluable. They’ve helped us understand how the job description should evolve over time, ensuring that we’re hiring the right person for whatever the business needs are at the time.

Describe Pear in 3 words:

Bobyard: 

Home: I spent all my waking hours in the Pear office during PearX.

Support: the team believed in my vision and took a chance on me.

Family: I work out of the Pear SF office now, and every day I see people from the Pear team.

Advex: 

Grounding: they’ve helped us understand why our customers are reacting the way they are, given their previous experience.

Flexible: helping us get what we need, whether it’s product or hiring.

Friendly: they help us think about the past and the future, working on improving at all times and making good decisions moving forward.

Any advice for early-stage founders? 

Bobyard: You should apply. When you do apply, don’t force yourself to be a founder you are not or a company you are not. Build conviction on the thing you’re building. 

EarthXYZ: If you’re a founder like me with a deep tech, hard tech, or otherwise non-traditional (not purely software) company, don’t be turned away from applying to PearX. Beyond software, the partners bring their own networks, resources, and a deep bench of expertise, and early-stage building is company-agnostic in many ways.


Applications for PearX are due Wednesday, May 1. Apply today!

Welcoming Ana Leyva to the team!

Pear is thrilled to welcome Ana Leyva to our go-to-market (GTM) team! Ana’s journey is a testament to her passion for the startup ecosystem, with past roles at tech unicorns Box, ServiceTitan, and Vanta. We’re so excited to have her on board, bringing first-hand experience as a seasoned operator to our founders.

In her short time with us, Ana has already supported over 30 Pear companies, strategizing with them on all things GTM. This covers many of the critical steps of sales: nailing ICP, prospecting, customer discovery, messaging, and more.

A Bay Area native, Ana is a first-generation college graduate from Princeton University and holds an MBA/MA in Education from Stanford. After graduating from Princeton, she joined Box pre-IPO at Series E. At Box, she was hooked on startup culture, especially the GTM and commercial arms. Following Box, she was an early sales hire at ServiceTitan and then, while at Stanford GSB, worked with Vanta to hone their early sales motion. Following business school, Ana became a founder and CEO herself with her ed-tech startup Lelu. Ana embodies the entrepreneurial spirit that Pear champions. 

“From early on, I saw both Mar and Pejman champion founders in a way that was genuine and authentic. That authenticity and commitment to being helpful is the backbone of Pear’s culture and makes it stand out in the sea of VCs.”

Ana was a Pear Fellow while at Stanford GSB, and we’re so excited to have her with us full time. She hosts Winning Wednesdays, a bi-weekly webinar series on GTM topics, and will continue to build out Pear’s GTM programming. 
Interested in connecting with Ana? Email her at ana@pear.vc.

Debunking common myths: what technical founders need to know about sales

As a technical founder, you’re no stranger to challenging assumptions and pushing boundaries. Yet, when it comes to sales, many technical founders fall prey to common myths and misconceptions that can hinder their startup’s growth. Let’s debunk these myths and uncover the truth about sales:

1. Myth: A Good Product Will Sell Itself, So I Don’t Need to Worry About Sales: Especially from a technical point of view, it’s easy to believe that if you’ve built a great product, customers will come flocking. While a strong product is undoubtedly essential, it’s not enough to guarantee success. In reality, ideas are cheap and execution is critical. Competitors will inevitably emerge, and without a solid sales strategy, your product will get lost in the noise, even if it’s better than the alternatives. Early sales are crucial for validating your idea, learning how to talk about it, and gaining traction in the market.

2. Myth: I Can’t Sell Because I’m Not “Salesy”: Many technical founders shy away from sales, believing it’s a skill reserved for natural-born salespeople. However, this couldn’t be further from the truth. In Geoffrey Moore’s “Crossing the Chasm,” he highlights the importance of early sales to tech enthusiasts – a group that technical founders are uniquely positioned to identify with and sell to. Your deep understanding of the product and its technical intricacies can be a powerful asset in connecting with early adopters. Sales isn’t about being pushy or overly charismatic; it’s about building relationships and solving problems. 

3. Myth: Sales Is a Necessary Evil: Some technical founders view sales as a necessary evil – something to be delegated while they focus on building a product-centered company. However, the best businesses understand that success lies in being customer-centric from the get-go. Sales and technical teams should work hand in hand throughout the lifetime of the company, both driven by a shared commitment to delivering value to customers. Interweaving the shared success of both teams early on fosters collaboration and ensures that the customer remains at the heart of every decision. Ultimately, the most successful companies recognize and appreciate the unique contributions that both technical and sales teams bring to the table.

[Image: Startup success requires the founder to be the bridge between the technical and sales sides of the org.]

Technical founders must challenge traditional myths about the sales profession and recognize the pivotal role that sales will play in their startup’s success. A great product is just the beginning – it’s how you sell it that sets you apart. Embrace sales as a strategic imperative, leverage your technical expertise to connect with early adopters, and build a customer-centric organization from day one. By doing so, you’ll pave the way for sustainable growth and lasting success.

Want to know more about leading GTM as a technical founder? Check out our additional resources for technical founders here.

Welcoming Hannah Berke to Pear!

We’re excited to announce that Hannah Berke joined Pear a few months ago as part of our Community + Operations team! A seasoned community leader, Hannah is a welcome addition, especially after our recent Pear Studio SF expansion.

Born and raised in Chattanooga, Tennessee, Hannah was introduced to the Bay Area as an undergraduate at Stanford, where she worked in technology integration and strategic communications at Amazon and MongoDB. She was also deeply involved in university life, serving in a number of university offices, including the Board of Trustees, as well as community groups. “I learned the impact that a strong community can have on someone’s success— and that I was super passionate about creating those types of environments for people.”

Hannah first encountered Pear through a classmate and friend who participated in PearX. She’d been hooked on startup culture ever since arriving at Stanford, and Pear’s founder-first approach, “the mentality of giving before getting,” stood out among VCs. At Pear, Hannah now carries that mentality forward, creating the community support founders need to build the best companies possible.

“Mar and Pejman’s vision for a best-in-class ecosystem and community of tech builders is amazing, and I’m honored I get to be a part of bringing it to life.”

We’re thrilled to have Hannah on board. You can reach her at hannah@pear.vc.

Minimum Fundable Team: how early team shapes seed fundraising

My role at Pear is to directly support the pre-seed founders who participate in PearX with their hiring needs. PearX is our hands-on, 14-week bootcamp designed to position founders to raise seed rounds from top-tier investors, and we’re experts at helping companies raise this capital: over 90% of PearX companies go on to raise from top investors.

We have found that, at the early stage, the four largest themes driving an investor’s decision to invest in a company are:

  • Market
  • Product
  • Traction
  • Team

To successfully raise capital, you need all four to be great, OR one or two to be exceptional.

However, at pre-seed in particular, markets are difficult to size and product or traction are often still too early to measure with a high degree of conviction. For these reasons, investors will often place an outsized emphasis on the quality and completeness of the team when making an investment decision.

At Pear, we refer to the completeness of a team at this stage as Minimum Fundable Team or MFT.

Prior to raising a pre-seed or seed round, founders should ensure their MFT is a competitive advantage. We suggest that all founders ask the following three simple questions to determine the completeness of their team prior to raising:

1. Is someone on the team a deep subject matter expert in the market or product you’re building?
2. Has someone on the team built a successful product from zero? 
3. Do you have the right mix of skills across the team required to ship a quality product quickly? 

If the answer to any of the questions above is no, what steps need to be taken to fill in any gaps to achieve MFT? 

We believe that hiring is one of the best ways to do this quickly. 

One of the advantages of joining PearX is that helping founders achieve MFT is a core part of our offering. Over the last 12 months, I have worked with 30 different teams and helped fill over 25 roles. Each of these hires has played a critical role in helping those teams reach MFT and close a successful round of funding.

If you take away one learning from this article, it’s that hiring plays one of the most critical roles in early-stage fundraising. Founders who achieve MFT prior to fundraising will have a higher likelihood of success than those who don’t.

PearX: join the ranks of elite founders

The most elite founders walk through walls to build category-defining companies. They are rare individuals who don’t come along very often. Having seeded and helped build companies like DoorDash, Dropbox, Vanta, Aurora Solar, Gusto, Guardant Health, and Affinity from their earliest days, we know what these founders look like and what they need. We developed PearX to partner with and support the very best entrepreneurs at the very beginning of their journey and help them get to the next stage.

We believe small cohorts of fewer than 20 teams give PearX companies a disproportionate advantage. As a result, 90% of our PearX alums go on to secure funding following our Demo Day. Pear’s unparalleled resources lead to these unrivaled results. Here are some of the things we offer each PearX company.

  • Capital to build your company, your way: We invest between $250k and $2M in all PearX companies. We know that some founders only need a small amount of capital to ideate and other teams are in more cost-intensive verticals that require more funding. All companies are unique so we’ll work with you. 
  • Founder to founder: Work 1:1 with a partner who has been in your shoes and knows your industry. Our team has started and sold 10 companies to the likes of Cisco, Instacart, Plaid, and Zynga.
  • Join the best community: Entrepreneurship doesn’t have to be a lonely journey. Joining PearX means joining a community of like-minded founders. We kick off each cohort with Camp Pear: a 3-day retreat for the entire cohort to come together, learn key company building tactics, and get to know one another. Not only will you work alongside your PearX batch for 14+ weeks, but you’ll also have the wider PearX alumni network to lean on. 
  • Access Pear Studio: Everyone in PearX receives dedicated office space in Pear Studio SF, our 30,000 square foot state-of-the-art office space with standing desks, conference rooms, phone booths, and more. This space is completely free to you for the first 12 months.
  • Build a scalable sales motion: Our go-to-market team, Pepe and Ana, will guide you through the critical steps of sales: nailing ICP, prospecting, customer discovery, messaging, and more. 
  • Recruit the best talent: Our dedicated PearX Recruiter, Nate, will find your founding engineer, co-founder, or whatever critical hire your team needs. In the last two cohorts, Nate has hired 25 people for our PearX companies. Nate leads the full cycle of recruiting for your team – from sourcing to closing candidates. This is an unprecedented level of support for an accelerator, but that’s how much we believe that hiring impacts company building. Last batch, Nate hired an average of two people per PearX company.
  • Fundraise strategically: We help you raise additional capital when you’re ready, from perfecting the story and creating a pitch deck to building a target investor list and negotiating and closing your round. In fact, 90% of companies that go through PearX raise capital from institutional investors.

Are you interested in joining our PearX S24 cohort? We’re actively looking for our next batch of founders, and we’d love to hear from you. Please apply at pear.vc/pearx!

Pear Biotech Bench to Business: insights on tackling solid tumors and navigating company creation with Shelley Force Aldred

Here at Pear, we specialize in backing companies at the pre-seed and seed stages, and we work closely with our founders to bring their breakthrough ideas, technologies, and businesses from 0 to 1. Because we are passionate about the journey from bench to business, we created this series to share stories from leaders in biotech and academia and to highlight the real-world impact of emerging life sciences research and technologies. This post was written by Pear PhD Fellow Sarah Jones.

Today, we’re excited to share insights from our discussion with Dr. Shelley Force Aldred, CEO and co-founder of Rondo Therapeutics. Shelley is a serial founder and prominent figure in the antibody drug development space. 

More about Shelley:

Shelley earned a Ph.D. in genetics from Stanford, where she worked on the Human Genome and ENCODE projects in the lab of Rick Myers. She spun her first company, SwitchGear Genomics, out of Stanford in 2006 with a grad school colleague who has since become her long-term business partner. After selling SwitchGear in 2013, Shelley shifted her focus from producing genomics tools to developing therapeutics: she helped build TeneoBio from the ground up, leading preclinical development of the company’s T-cell engager platform for treating liquid tumors, a platform that has generated $1.5 billion in upfront payments to date from multiple big pharmas. Shelley then moved on to start yet another company, Rondo Therapeutics, where she currently serves as CEO. There, she leads a team that develops innovative therapeutic antibodies for the treatment of solid tumors.

If you prefer listening, here’s a link to the recording!

Key takeaways:

1. The therapeutic window in immuno-oncology is narrow: tuning the immune system in the case of solid tumor treatment can be like playing with fire. To overcome this, Rondo has focused on using bispecific antibodies to find the ‘Goldilocks’ zone between efficacy and toxicity.

  • Previously at Teneobio, Shelley spearheaded efforts in preclinical development of immune cell engaging antibodies for liquid tumors. However, the lessons they learned and the molecules they developed couldn’t make a dent in solid tumors. Wanting to attack this problem head-on, Shelley and her long-time colleague Nathan Trinklein made the decision to start Rondo Therapeutics. 
  • One characteristic behavior of solid tumors that makes them particularly difficult to treat is their ability to trick the immune system into thinking they aren’t a threat. To combat this, Rondo is creating immuno-oncological therapies that can re-activate the immune cells that reside in the tumor microenvironment. 
  • Rondo’s efforts have been focused on the development of bispecific antibodies which are Y-shaped molecules with two arms that each can grab on and bind to different substrates. Rondo engineers these antibodies so that one arm recognizes and binds to proteins on the tumor cells while the other arm grabs onto immune cells. This brings the cells into close proximity so that the immune cells can recognize and kill the cancer cells. 
  • Shelley noted that other strategies such as checkpoint inhibitors and antibody drug conjugates often lack efficacy in solid tumors. In addition, CAR-T and other cell therapies have shown some promising preliminary results, but they can’t be administered in an off-the-shelf manner and are difficult to scale up.

Where we felt like we fit is as an off-the-shelf solution to driving tumor and immune cell engagement in a way that’s targeted specifically to the location of the tumor and isn’t body wide.

  • However, modulating the immune response is no easy feat. If pushed too far, the immune cells can start to attack healthy cells and tissues elsewhere in the body. Rondo’s cutting-edge bispecific antibodies ‘thread the needle’ and strike a balance between sparing healthy cells and killing tumor cells. Shelley noted that Rondo has been making steady progress in preclinical development and plans to be in the clinic in 2025.

I think within other kinds of immune cell-engaging bispecifics, what we have a reputation for and are really good at is tuning and finding the Goldilocks zone. So, we’re going to be best in class in terms of this therapeutic window.

2. The ability to pivot and change directions is critical; one of Shelley’s strengths is her ability to follow the science and rely on the advice of her team. 

  • Being a founding member of three companies is quite an accomplishment. Shelley explained that joining a new company and growing it from the ground up is simultaneously an incredible opportunity and a ‘trial by fire.’ One of the advantages of working in a small start-up is the chance to take on roles that you might not otherwise have access to. 

[In a smaller company], you get to see more pieces than you would in a larger company. Inherently, when you’re in a group of only 10 or 20 people, there’s so much more visibility into what’s happening in other groups or in other people’s responsibility spheres. It’s really hard to get that in a larger company.

  • Each member within a smaller team has more responsibility in guiding the company and achieving critical milestones. Early founders and employees have to wear a lot of different hats to solve problems and ultimately push the company forward.
  • For example, Shelley noted that she had to spend a lot of time thinking not only about the science, but also about choosing the right targets and indications to pursue.

Part of that is staying humble and realizing you might not always be choosing the right targets on the first pass. We do high-throughput genomic space discovery, and so we always have a lot of targets in the mix; we have our lead program, but we also have backup programs. Particularly in this field, targets can go cold really quickly, depending on clinical results that are coming out from other companies.

  • Shelley also emphasized the importance of finding those who are willing to ride the roller-coaster with you. Bringing in experts and team members with different strengths can help keep the company agile. There are many reasons why a pivot might be necessary, and it is important to be willing to follow the science and the market. 
  • Even at Switchgear, the first company Shelley founded, an early pivot led to their eventual success and acquisition. The initial vision for the company anticipated a small number of customers placing very large orders. However, it turned out that the market called for serving thousands of customers, each placing small orders through an e-commerce platform. Because the team was willing to change its vision, the company ended up being extremely successful.

3. Your team is your most valuable resource, especially early on in company creation, and it’s important to surround yourself with a supportive community and a team you fully trust. 

  • It’s not a coincidence that Shelley found herself working alongside Nathan Trinklein at three of her companies – Switchgear, Teneobio, and Rondo. After running operations at Switchgear and overseeing its acquisition, the pair found themselves wanting to transition to therapeutics to get closer to patients and into a bigger market. 

Company building is a heavy lift, and being able to do that with someone with great capability that I trust deeply has increased my enjoyment of doing this quite a bit. But I also think it’s increased our likelihood of success at every step: we both have deep respect for each other and enough confidence that we just push on each other all the time. I mean, there’s constant pressure testing of ideas and conclusions. And I think what comes out the other side is always better than it would have been if only one of our brains was attacking it.

  • Another important relationship that needs to be established early on is between the founding team and the investor syndicate. Ideally, early-stage companies will have the opportunity to choose investors that support their long-term vision for the company – though the funding environment will likely determine exactly how much of a choice a founder has.
  • At Rondo, Shelley prioritized investors who were deeply versed in therapeutics and understood the relevant risks and timelines for milestones. Biotech tends to move at a slower pace, and finding firms that understand this can make a huge difference in the long run.

I am grateful for my current funding syndicate at Rondo… they are all really experienced therapeutics investors, and this is important because it means they have realistic expectations about what we’re going to be able to achieve on what timelines and what amount of capital this is going to take. It also means that their deep expertise and their networks help support us quite a bit.

4. It’s not a secret that starting a company is hard, but Shelley highlights a few ways she stays motivated. Explaining that she prioritizes bringing high quality talent into her companies, she says it’s a good thing to ‘feel a little bit stupid, at least once a day.’ Being challenged to do better and learn more is one of her favorite things about the job. 

  • Looking back over her experience as an operator, CEO, and co-founder, Shelley acknowledged that she grew and learned a lot about herself, “realizing with the exquisite mix of joy and pain that is starting a company, it was indeed the right place for me.”
  • Her motivation comes from keeping herself always on the steep phase of the learning curve. Instead of focusing on what she doesn’t know or isn’t able to do, she pushes herself to learn constantly from her team and to bring in people who are experts in their roles. 
  • Not only does Shelley spend a lot of time recruiting and finding good employees, she also spends a considerable amount of time finding community within the broader biotech ecosystem to help keep her motivated.

We all fight different battles in our professional careers… and my network of other entrepreneurs has kept me afloat during a fundraising process when you’ve gotten a 50th no and you’re not sure you can get up and do it again. Talking to someone who’s been there and says, ‘I know you can get up to pitch number 51,’ is really important.

  • Shelley also explained the importance of finding people with shared experiences who can support you. While she enjoys challenging herself and pushing her limits, she also gives credit to her network and support system for keeping her grounded. For example, she regularly meets with her group of women biotech CEOs, and she’s found a sisterhood of women through her HiPower women’s group where she serves as an executive member.

These groups of women have been life-saving, sanity-saving in 100 different ways: primarily because they know exactly what it feels like to operate in shoes just like mine. So whatever battles you are fighting, finding people who have fought similar battles is really important.

5. Not all technical founders make great, long-term CEOs. However, with commitment to leadership development and a willingness to learn on the go, Shelley has shown that it’s possible to grow into the role and successfully lead a company.

  • Depending on the company structure, size, sector, and investor preference, there are a number of reasons founder-CEOs may not stay at the helm of their company long-term. However, Shelley has shown that technical founders, with the right experiences and mindset, can be effective leaders. 
  • While Shelley did admit to having a propensity for leadership growing up, she has made the conscious decision to invest in her own development: she reads countless books, works with an executive coach and a therapist, and is willing to apologize for her mistakes. Her humility and compassion also foster a positive team culture at Rondo.

What I’ve tried to do is own those mistakes, apologize, … and do better later. I think that people are really wonderful and supportive when you say, ‘you’re in a learning process, you can have some compassion for me, as I’m learning to be a better leader. Just like I will have compassion for you.’ It’s like you’re learning to do your job better, and it’s really opened the gates to excellent feedback.

  • Before stepping into her role as CEO, Shelley gained invaluable experience in operational positions as a founding team member of Switchgear and Teneobio. When asked about her decision to take the CEO role at Rondo, Shelley explained that she had always known she wanted to get a chance at the job and put herself in a position to take that opportunity. 
  • When it comes to the particular skills that have helped her be an effective CEO, she explains that her strength isn’t excellence in any single area; instead, she is a jack of all trades.

I’m not off the charts in any one particular area, I think my advantage is that I’m good at a lot of things. So like I said, I’m a good scientist, I’m a good operator, I’m good at managing finances, I have a pretty natural sense of managing people, and I can kind of put all these things together. I think I also am good at synthesizing information that I get from a lot of different places, I’m willing to make hard decisions, and I have a pretty high risk tolerance.

Getting to know Shelley:

In her free time, Shelley likes to travel and is a voracious reader, with a particular affinity for mystery and detective novels and historical fiction. One thing people are surprised to learn about her is that she can drive a boat much better than she can drive a car, because she grew up water skiing and boating regularly with her family.

Some advice she would give someone looking to follow in her career footsteps would be to create your own opportunities and to not waste time being miserable in your work. She noted that even in your best jobs, not every day will be a good one. However, she said if you regularly wake up and dread going to work, it’s okay to look for something else. 

Pear portfolio company BioAge Labs announces oversubscribed $170M Series D financing

Last week, Pear portfolio company BioAge Labs announced its $170M Series D round led by Sofinnova Investments, with participation from a strong syndicate of new investors including Longitude Capital, RA Capital, OrbiMed Advisors, RTW Investments, Eli Lilly, and Amgen, among others, in addition to many existing investors. 

To mark this occasion, we wanted to share more about Pear’s partnership with BioAge Labs and its co-founders, Kristen Fortney (CEO) and Eric Morgen (COO). 

Pear’s founders, Pejman and Mar, first met Kristen in 2015 through an introduction by another company founder associated with the Stanford Genome Technology Center. At that time, Kristen was a postdoc at Stanford in Professor Stuart Kim’s lab, where she studied the genetics of extreme human longevity. She had published extensively on genetics and longevity, but the company was merely an idea. She was pondering the question: could we use genetic information and new machine learning techniques to develop a therapy discovery platform for longevity? Kristen’s vision then was just as clear as it is today.

That year, Pear invested in BioAge’s initial seed financing, and we have gone on to back BioAge at every subsequent round, including the Series D.

It’s not common for a seed-stage-focused firm like ours to keep investing all the way to a Series D round, so why have we continued to support BioAge?

Significant unmet need and large market opportunity in obesity and metabolic disease

The company’s lead drug program, azelaprag, addresses obesity and metabolic diseases. A staggering 40% of American adults are considered obese, and many suffer from a host of comorbidities including diabetes, heart disease, and stroke.

One of the most exciting recent medical advances has been the remarkable success of GLP-1 receptor agonist drugs in achieving dramatic weight loss in such patients, while still being generally safe and well tolerated.

With this drug class expected to eventually exceed $150 billion in sales annually, the top two developers, Eli Lilly and Novo Nordisk, have catapulted to become the first and second largest pharma companies by market capitalization (~$740B and $550B, respectively, as of mid-Feb. 2024). 

As impressive as GLP-1 drugs are, one downside is that they can result in suboptimal body composition, in that they lead to the loss of both fat and muscle. BioAge’s preclinical studies have shown that azelaprag, which is a first-in-class oral apelin receptor agonist, can enhance body composition when combined with a GLP-1 drug. In a Phase 1b study sponsored by BioAge, azelaprag prevented muscle deterioration and promoted muscle metabolism in healthy older volunteers at bedrest.

A second limitation is that oral GLP-1 drugs have so far lagged behind the injectable versions in efficacy. Of course, most patients would strongly prefer orally dosed medications over injectables. In BioAge’s preclinical studies, azelaprag combined with a GLP-1 drug has been shown to double the weight loss achieved by the GLP-1 drug alone. Because it can be orally administered and has been well tolerated, azelaprag in combination with an oral GLP-1 drug may help to close this efficacy gap.

Human-first target discovery platform enabled by multi-omic analysis of aging human cohorts 

BioAge didn’t initially begin with a focus on a lead therapeutic asset in obesity. In fact, BioAge started as a target discovery company within the longevity space, with the ambitious goal of understanding the biology of human aging in an effort to extend human lifespan and healthspan. 

Although the longevity field has recently attracted much attention and investment, not all therapeutic strategies pursued have been equally scientifically rigorous. Many approaches rely on attempting to translate to humans the tantalizing life-extension or rejuvenation effects obtained in model organisms with very short lifespans, like nematodes and mice.

But the biology of aging differs dramatically across species, and BioAge’s unique strategy was to partner with special biobanks that collected and stored blood from cohorts of people from middle age until death and that retained associated health records. By deploying multi-omics (primarily proteomics) and AI to interrogate the factors correlating with healthy human aging, the company generated unique insights into particular therapeutic targets of interest. 

From this platform, one of the strongest targets that emerged was apelin, the peptide that azelaprag is designed to mimic. Exercise stimulates the release of apelin from skeletal muscle into the blood, and in BioAge’s cohorts, middle-aged people with more apelin signaling lived longer, with better muscle function and better brain function. Correspondingly, azelaprag protected elderly mice from muscle atrophy and preserved muscle function in vivo.

Strong leadership team, advisors, and partners

As one might imagine, the team at BioAge has grown and matured substantially since inception in 2015. The leadership team today has world-class experience across biopharma. And in pursuing its Phase 2 study of azelaprag in combination with Eli Lilly’s GLP-1/GIP drug tirzepatide (Zepbound), BioAge will receive support from Eli Lilly’s Chorus organization, including the supply of tirzepatide and clinical trial design and execution expertise.  

It’s certainly uncommon for a postdoc straight out of the lab to lead a therapeutics company all the way to a Phase 2 clinical study. But as Kristen relayed during a fireside chat at our Pear office, she progressively learned what she needed to know on the job, and she was not afraid to surround herself with experts specializing in the many functional domains required to take a drug program from a target to the clinic.

This dedication to continual self-improvement and learning has been a hallmark of the many strong founders that we are fortunate to back at Pear. We are grateful that Kristen is helping to guide the next generation of such founders as part of our Pear Biotech Industry Advisory Council.


For these reasons, we remain excited to support BioAge Labs. We eagerly anticipate the results of its mid-stage clinical trials of azelaprag in obesity, as well as the development of additional programs nominated from its unique human aging target discovery platform. 

Pear Biotech Bench to Business: insights on the past, present, and future of synthetic biology with Dr. Jim Collins

Here at Pear, we specialize in backing companies at the pre-seed and seed stages, and we work closely with our founders to bring their breakthrough ideas, technologies, and businesses from 0 to 1. Because we are passionate about the journey from bench to business, we created this series to share stories from leaders in biotech and academia and to highlight the real-world impact of emerging life sciences research and technologies. This post was written by Pear Partner Eddie and Pear PhD Fellow Sarah Jones.

Today, we’re excited to share insights from our discussion with Dr. Jim Collins, Termeer Professor of Medical Engineering and Science at MIT. Jim is a member of the Harvard-MIT Health Sciences and Technology faculty, a founder of the Wyss Institute for Biologically Inspired Engineering at Harvard, and a member of the Broad Institute of MIT and Harvard. His work has been recognized with numerous awards and honors over the course of his career, such as the MacArthur “Genius” Award and the Dickson Prize in Medicine.

Hailed as one of the key pioneers of synthetic biology, Dr. Collins has not only published numerous high-profile academic papers, but also has a track record of success as a founder and as an entrepreneur, co-founding companies such as Synlogic, Senti Biosciences, Sherlock Biosciences, Cellarity, and Phare Bio. If all that wasn’t enough, he’s even thrown the first pitch at a Boston Red Sox game. We were lucky to sit down and chat with Jim about his experiences and his perspective on the future of synthetic biology. 

If you prefer listening, here’s a link to the recording! 

Key takeaways:

1. At its conception, synthetic biology was simply a ‘bottom-up’ approach to molecular biology utilized by collaborative, interdisciplinary scientists. 

  • In the late 1990s, Jim’s focus in biology began to shift: rather than continuing to explore biology at the whole-organism or tissue level, he found himself more excited about molecular-scale biology. After speaking with some bioengineering faculty members at Boston University who were interested in his background in physics and engineering, Jim was quickly invited to join the department. From there, his interest in designing and engineering natural networks and biological processes flourished.
  • At that time, however, bioengineers weren’t yet able to reverse engineer biological systems and exert precise control at the molecular scale. He asked, 

Could we take a bottom-up approach to molecular biology? Could we build circuits from the ground up as ways to both test our physical and mathematical notions and also to create biotech capabilities?

  • Though it didn’t start out as a quest to launch a new scientific field, Jim’s work contributed heavily to what would become the foundation of synthetic biology. He noted the value in bringing together scientists with diverse backgrounds to work on the same problems; for example, neuroscience had greatly benefitted from the introduction of mathematical models to describe complex neural systems. In a similar way, physicists, mathematicians, and molecular biologists began to find themselves interested in the same sorts of complex biological questions that could not be answered by any one discipline alone. 
  • Jim also acknowledged that in the early days, the tools to engineer gene networks and molecular pathways did not exist, yet his team could envision a future in which gene networks could be described and designed using elegant mathematical models and a modular set of biological tools. This goal helped to propel synthetic biology into existence.

2. The ability to program genetic circuits marked the beginning of synthetic biology and allowed efforts within the field to quickly progress. 

  • One notable 1995 publication in Science by Lucy Shapiro and Harley McAdams, titled ‘Circuit simulation of genetic networks,’ helped to shape Jim’s efforts in programming genetic circuits. The paper explored parallels between electrical circuits and genetic circuits and used mathematical modeling to accurately describe the bacteriophage lambda lysis-lysogeny decision circuit. In this circuit, bacteriophages that have infected bacterial cells must decide whether to kill the cell or remain dormant, sparing the cell’s life.
  • Such work helped to bridge the gap between bioengineering and molecular biology at a time when many bioengineers felt largely excluded from the world of molecular biology.
  • To prove that this kind of genetic circuit engineering was possible, the Collins lab worked to develop a genetic toggle switch in the form of a synthetic, bi-stable regulatory genetic network that could be switched ‘on’ or ‘off’ by applying heat or a particular chemical stimulus. This is significant because researchers could now add well-defined genetic networks to cells in order to precisely control their behavior or output.
  • This work by Gardner et al. was published in 2000 in the prestigious scientific journal Nature under the title “Construction of a genetic toggle switch in Escherichia coli.” Interestingly, in the same issue of Nature, work by Michael Elowitz and Stanislas Leibler outlined the development of another synthetic gene circuit in E. coli. Their system, dubbed the ‘Repressilator,’ was also a regulatory network, one in which three feedback loops oscillate over time and change the state of the cells. Basically, it was three genes in a ring where gene A inhibits gene B, which inhibits gene C, which in turn inhibits gene A, creating an oscillatory network (a minimal version of this model is sketched after this list).
  • This critical body of work demonstrated that engineering synthetic gene circuits was possible and highlighted tools and methods that could be used to modulate molecular systems.
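
As an illustrative aside (a minimal sketch of the standard Elowitz-Leibler formulation, not something spelled out in the conversation above), the Repressilator’s ring of mutual repression is commonly written as six coupled differential equations, one mRNA equation and one protein equation per gene:

\frac{dm_i}{dt} = -m_i + \frac{\alpha}{1 + p_j^{\,n}} + \alpha_0, \qquad \frac{dp_i}{dt} = -\beta\,(p_i - m_i)

Here each gene i is repressed by the protein p_j of the gene upstream of it in the ring (B by A, C by B, A by C); \alpha is the maximal transcription rate, \alpha_0 accounts for leaky transcription, n is the Hill (cooperativity) coefficient, and \beta is the ratio of the protein and mRNA decay rates. Sustained oscillations emerge roughly when repression is strong and cooperative and protein and mRNA lifetimes are comparable.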

3. To expand the repertoire of synthetic biology, Jim has co-founded two companies, Synlogic and Senti Biosciences, that are aimed at targeting the gut microbiome and engineering the mammalian system.

  • While initial excitement for synthetic biology applications centered on biofuel generation, small-scale bioreactors were never a cost-competitive match for fossil fuels. The paradigm in synthetic biology later started shifting away from biofuel generation to focus on the microbiome and its role in human disease.
  • As local venture capitalists approached Jim and asked about what could actually be done with synthetic biology, it became clear to Jim that there were two main directions he could pursue. 

One was…an opportunity to create a picks and shovels company in synthetic biology. So, coming to create additional components or capacity to address a broad range of indications and applications, be it biofuels, industrial applications, therapeutics. The second was that you could engineer microbes to be living therapeutics, and in some cases, living diagnostics.

  • Jim partnered with Tim Lu, his former student and eventual coworker at MIT, to start Synlogic. One early direction of Synlogic was tackling a rare genetic metabolic disorder, phenylketonuria (PKU), that causes the amino acid phenylalanine to build up in the body. The idea was that they could engineer a microbe that could break down this byproduct and thereby eliminate the negative effects of the disease. This approach relied on the ability of the synthetic biologist to directly harness and control cell behavior via genetic engineering. 
  • Synlogic is also working on enzymes that produce therapeutic molecules instead of degrading toxic ones. The company now has efforts in inflammatory bowel disease and Lyme disease and has partnered with Roche to advance its pipeline. 
  • By around 2015, synthetic biology had continued to grow as an academic discipline and had moved beyond microbes to mammalian cells. Jim had since moved his lab from Boston University to MIT, and it wasn’t long before he was once again collaborating with Tim Lu, this time to apply synthetic biology in a mammalian system. This marked the start of Senti Biosciences, a company aimed at creating ‘smart medicines’ using genetic circuits.

We began to consider the possibility that we could do a mammalian version of Synlogic. Could we begin to really advance the development of human cell therapy and gene therapy using synthetic genes and gene circuits to create smart medicines? Having therapeutics that could sense their environment, sense the disease state or sense the disease target and produce therapeutics in a meaningful, decision-making way… was an exciting notion.

4. Historically, a lack of support from the venture community and insufficient infrastructure have been challenges for the diagnostics space.

  • Another company Jim helped start, Sherlock Biosciences, also leverages synthetic biology but operates in the diagnostic space. Although the diagnostic space is a notoriously challenging one, Sherlock was founded with the goal of combining approaches from synthetic biology and CRISPR technology to develop next-generation molecular diagnostics for at-home tests.
  • While many of the companies started right before the COVID-19 pandemic ultimately didn’t make it long-term, the team at Sherlock was able to quickly pivot and develop a CRISPR-based COVID-19 diagnostic that received FDA Emergency Use Authorization in May 2020. Notably, this test was the very first CRISPR-based product authorized by the FDA.
  • Jim explained that the difficulties facing a company trying to operate in the diagnostics space are twofold:
    • (1) there is a lack of infrastructure for things like at-home testing, point-of-care testing, or nucleic acid tests
    • (2) there is a general lack of support for diagnostic companies in the venture community
  • Diagnostics companies are essentially valued as a multiple of revenue. In contrast, therapeutics companies can be valued based on projections 10-20 years into the future without the requirement of existing revenue. Combine this with the fact that wins tend to be much larger in the therapeutics space, and it’s clear why diagnostic discovery and development have largely been set aside.
  • While COVID-19 did help to bring interest to the sector, funding and infrastructure continue to limit breakthroughs in diagnostics. 

5. Desperate for new antibiotics: a combination of synthetic biology, machine learning (ML), and in silico modeling has so far been fruitful.

  • With a challenging funding landscape, antibiotics have also long been neglected by VCs and industry. Despite this, Jim’s team was able to secure funding for its antibiotic discovery work through The Audacious Project, a philanthropic effort put together by TED. The funded project involved developing deep learning-based models that could both discover and design novel antibiotics against some of the world’s nastiest pathogens. In fact, the team found success when they discovered a very powerful antibiotic called halicin.
  • The Collins lab’s continued efforts in this area are highlighted in a recent Nature article, “Discovery of a structural class of antibiotics using explainable deep learning.”
  • Jim stressed the urgency for new antibiotic development: the pipeline has been drying up, but the demand has only increased. Acquired antibiotic resistance is also a significant problem that hasn’t yet been resolved.
  • As new, powerful antibiotics are developed, they become the last line of defense against the worst, most deadly pathogens. However, drugs held in reserve as a last line of defense don’t make it off the shelves very often, which means there is less financial motivation to develop particularly potent antibiotics. To address this, Jim noted that we are going to need a new financial model to sufficiently support research in this space.

6. Past the hype cycle: the synthetic biology of tomorrow.

  • The field has experienced its fair share of ups and downs. In speaking with Jim, it’s clear that the roller coaster of high expectations and disappointing failures has not diminished his excitement about the future of synthetic biology. 
  • In 2004, the initial hype cycle was centered on biofuels and their potential to replace fossil fuels. Unrealistic expectations combined with the high cost of biofuel production led to disappointment; people began to question whether or not synthetic biology could deliver. 
  • In the second hype cycle, bold claims and an attitude that synbio could solve every problem in the world led to yet another massive let-down and shift in attitude towards the field. 

I think the markets haven’t kept pace with the public statements that are being made by some of the high priests in the field. And that’s a shame. I do think synthetic biology will emerge as one of the dominant technologies of this century. Our ability to engineer biology gives us capabilities that can address many of the big challenges that we have. But it’s still going to take a lot of time, it’s still very hard to engineer biology, and biology is not yet an engineering discipline.

  • Successes in areas where biology still outcompetes chemistry have helped to put some points back on the board for synthetic biology. Increasing use in therapeutic development has leveraged the efficiency of biological systems and will help to pave the way for the next wave of discoveries in the field.
  • Technologies like cell-free systems also have Jim excited about the future of synthetic biology. 

Get to know Jim Collins: 

Early career and developing a passion for science: 

  • Jim comes from a family of engineers and mathematicians and has always found himself wanting to do science. Jim explained that when he was four years old, his dad was a part of a team that designed an altimeter for Apollo 11. 
  • Another seminal event that influenced Jim’s decision to become a scientist was the decline of his grandfather’s health after a series of strokes left him hemiplegic. After watching someone he loved not receive the care or have treatment options that could restore function, Jim was inspired to pursue biomedical engineering. 
  • Once he saw that, as a professor, he could interface with clinicians, entrepreneurs, and policy-makers, he realized that was the path for him.

Advice for early-stage founders:

  • Find a strong business team early on to help find market fit and to guide the development of your final product. Young scientists are not trained to be good CEOs, and it’s often challenging to navigate these decisions if you don’t have the experience.
  • Make sure your strategy has a real market pull and is differentiated from other approaches.  

Perspectives in AI with Kamil Rocki, Head of Performance Engineering at Stability AI

At Pear, we recently hosted a Perspectives in AI fireside chat with Kamil Rocki, Head of Performance Engineering at Stability AI. We discussed breakthroughs at the hardware-software interface that are powering generative AI. Kamil has extensive experience with GPU hardware and software programming from his PhD research and his work at IBM, Nvidia, Cerebras, Neuralink, and of course now Stability AI. Read a recap of that conversation below:

Aparna: Kamil, thank you for joining us. You’ve accomplished many amazing things in your career, and we’re excited to hear your story. How did you choose your career path and what led you to work on the projects you’ve been involved with?

Kamil: My journey into the world of technology began in my 20s. After a few years of rigorous mathematical studies, I found myself in a robotics lab. I was tasked with enabling a robot to solve a Rubik’s cube. The challenge was to detect the cube’s location in an image captured by a camera, and this had to be done at a rate of 100 frames per second. 

I was intrigued by the work my peers were doing in computer graphics using Graphics Processing Units (GPUs). They were generating landscapes and waves, manipulating lighting, and everything was happening in real-time. This inspired me to use GPUs to process the images for my project.

The process was quite challenging. I had to learn OpenGL from my friends, write images to the GPU, apply a pixel shader, and then read data back from the GPU. Despite the complexity, I was able to exceed the initial goal and run the process at 200 frames per second. I even developed a primitive version of a neural network that could detect the cube’s location in the image.

In 2008, around the time I graduated, CUDA came out and there was a lot of excitement around GPUs. I wanted to continue exploring this field and heard about a supercomputer being built in Japan based on GPUs and ended up doing a PhD in supercomputing. During this time, I worked on an algorithm called Monte Carlo Tree Search, deploying it on a cluster of 256 GPUs. At that time, not many people were familiar with GPU programming, which eventually led me to the Bay Area and IBM Research in Almaden.

I spent five years at IBM Research, then moved to the startup world. I had learned how to build chips, design computer architecture, and build computers from scratch. I was able to go from understanding the physics of transistors to building a software stack on top of that, including an assembler, compiler, and programming what I had built. One of my goals at IBM was to develop a wafer scale system. This led me to Cerebras Systems, where I co-designed the hardware. Later I joined Neuralink and then Nvidia, where I worked on the Hopper architecture. I joined Stability, as we are currently in a transition to Hopper GPUs. There is a significant amount of performance work required, and with my extensive experience with this architecture, I am well-equipped to contribute to this transition.

Aparna:  GPUs have become one of the most profitable segments of the AI value chain, just looking at Nvidia’s growth and valuation. GPUs are also currently a capacity bottleneck. How did we arrive at this point? What did Nvidia, and others, do right or wrong to get us here?

Kamil: Nvidia’s journey to becoming a key player in the field of artificial intelligence is quite interesting. Initially, Nvidia was primarily known for its Graphics Processing Units (GPUs), which were used in the field of graphics. A basic primitive in graphics involves small matrix multiplication, used for rotating objects and performing various view projection transformations. People soon realized that these GPUs, efficient at matrix multiplications, could be applied to other domains where such operations were required.

In my early days at the Robotics Lab, I remember working with GPUs like the GeForce 6800 series. These were primarily designed for graphics, but I saw potential for other uses. I spent a considerable amount of time writing OpenGL code to set up the entire pipeline for simple image processing. This involved rasterization, vertex shader, pixel shader, frame buffer, and other complex processes. It was a challenging task to explore the potential of these GPUs beyond their conventional use.

Nvidia noticed that people were trying to use GPUs for general-purpose computing, not just for rendering images. In response, they developed CUDA, a parallel computing platform and application programming interface. CUDA significantly simplified the programming process: tasks that previously required 500 lines of code could now be expressed as something resembling a simple C program. This opened up GPU programming to a much wider audience, making it more accessible and flexible.
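
For a sense of what that shift looks like in practice, here is a minimal data-parallel kernel sketch. It uses Numba’s CUDA bindings so the example stays in Python, but the structure, one thread per element running the same few lines, mirrors the C-like CUDA code Kamil is describing; the thresholding task itself is just an illustrative assumption:

```python
import numpy as np
from numba import cuda

@cuda.jit
def threshold_kernel(pixels, out, cutoff):
    # One thread per pixel: the same simple operation applied to different data.
    i = cuda.grid(1)
    if i < pixels.size:
        out[i] = 1.0 if pixels[i] > cutoff else 0.0

pixels = np.random.rand(1_000_000).astype(np.float32)
out = np.zeros_like(pixels)

threads_per_block = 256
blocks = (pixels.size + threads_per_block - 1) // threads_per_block
threshold_kernel[blocks, threads_per_block](pixels, out, 0.5)  # requires a CUDA-capable GPU
```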

Around 2011-12, the ImageNet moment occurred, and people realized the potential of scaling up with GPUs. Before this, CPUs were the primary choice for most computing tasks. However, the realization that GPUs could perform the same operations on different data sets significantly faster than CPUs led to a shift in preference. This was particularly impactful in the field of machine learning, where large amounts of data are processed using the same operations. GPUs proved to be highly efficient at performing these repetitive tasks.

This realization sparked a self-perpetuating cycle. As GPUs became more powerful, they were used more extensively in machine learning, leading to the development of more powerful models. Nvidia continued to innovate, introducing tensor cores that further enhanced machine learning capabilities. They were smart in keeping their products flexible, catering to multiple markets: graphics, machine learning, and high-performance computing (HPC). The same chips supported FP64 computation for HPC, graphics workloads such as ray tracing, and tensor cores for machine learning. This adaptability and flexibility, combined with an accessible programming model, is what sets Nvidia apart in the field.

Over the last 15 years, from 2008 to the present, we have seen a multitude of different architectures emerge in machine learning. Each of these architectures, however different, could be expressed and executed on a GPU. This flexibility is crucial, as it allows for a wide range of operations without being limited to any specific ones.

This approach also empowers users by not restricting them to pre-built libraries that can only run a single model. Instead, it provides them with the freedom to program as they see fit. For instance, if a user is proficient in C, they can utilize CUDA to write any machine learning model they desire.

However, some companies have lagged behind in this regard. Their mistake was in not providing users with the flexibility to do as they please. Instead, they pre-programmed their devices and assumed that certain architectures would remain relevant indefinitely. This is a flawed assumption. Machine learning architectures are continuously evolving, and this is a trend that I foresee continuing into the future.

Aparna: Could you elaborate more on the topic of special purpose chips for AI? Several companies, such as SambaNova Systems and Cerebras, have attempted to develop these. What, in your opinion, would be a successful architecture for such a chip? What would it take to build a competitive product in this field? Could you also shed some light on strategies that have not worked well, and those that could potentially succeed?

Kamil: Reflecting on my experience at Cerebras Systems, I believe one of the major missteps was the company’s focus on building specialized kernels for specific architectures. For instance, when ResNet was introduced, the team rushed to build kernels for it. The same happened with WaveNet and, later, the Transformer. At one point, out of 500 employees, 400 were kernel engineers, all working on specialized kernels for these architectures. The assumption was that these models were fixed and optimized, and users were simply expected to use our library without making any changes.

However, I believe this approach was flawed. It did not take into account the fact that architectures change frequently. Every day, new research papers are published, introducing new models and requiring changes to existing ones. Many companies, including Cerebras, failed to anticipate this. They were so focused on specific architectures that they did not consider the need for flexibility.

In contrast, I admire Nvidia’s approach. They provide users with tools and allow them to program as they wish. This approach is more successful because it allows for adaptability. Despite the progress made by companies like Cerebras, Graphcore, and others, I believe too much time and effort is spent on developing prototypes of networks, rather than on creating tools that would allow users to do this work themselves.

Even now, I see companies building accelerators for the Transformer architecture. I would advise these companies to rethink their approach. They should aim for flexibility, ensuring that their architecture can accommodate changes. For instance, if we were to revert to recurrent nets in two years, their architecture should still be programmable.

Aparna: Thank you for your insights. Shifting gears, I’d like to talk about your work at Stability. It’s an impressive company with a thriving open-source community that consistently produces breakthroughs. We’ve observed the quality of the models and the possibilities with image generation. Many founders are creating companies using Stability’s models. So, my question is about the future of this technology. If a founder is building in this space and using your models as a foundation, where do you see this foundation heading? What’s the future of image generation technology at Stability?

Kamil: The potential of this technology, particularly in artificial intelligence, is immense. Currently, we’re seeing significant advances in image generation models. The quality of the generated images is often astounding, sometimes producing visuals that go beyond reality, which accelerates creativity and content creation. We’re now extending this capability into 3D and video: we’re actively working on models that can generate 3D scenes or objects as well as video. Imagine being able to generate a short clip of a dog running, or even an entire drama episode from a script.

We’re also developing audio models that can generate music. This can be combined with video generation to create a comprehensive multimedia experience. These applications have significant potential in the entertainment industry, from content generation for artists to the movie industry and game engine development.

However, I believe the real breakthrough will come when we move towards more industrial applications. If we can generate 3D representations and add video to that, we could potentially use this technology to simulate physical phenomena and accelerate R&D in the manufacturing space. For instance, generating an object that could be printed by a 3D printer. This could optimize and accelerate prototyping processes, potentially revolutionizing supply chains.

Recently, I was asked if a space rocket could be designed with generative AI. While it’s not currently feasible, the idea is intriguing and could potentially save a lot of money if we could solve complex problems using this technology.

In relation to hardware, I believe that generative AI and language models can be used to accelerate the discovery of new kinds of hardware and for generating code to optimize performance. With the increasing complexity and variety of models and architectures, traditional approaches to optimizing code and performance modeling are struggling. We need to develop more automated, data-driven approaches to tackle these challenges.

Aparna: You’ve broadened our understanding of the potential of generative AI. I’d like to delve deeper into the technical aspects. As the head of Performance Engineering at Stability, could you elaborate on the challenges involved in building systems that can generate video and potentially manufacture objects without error, performing exactly as intended?

Kamil: From a performance perspective, this is closely related to the earlier question about compute being a bottleneck. At present, only a few companies can afford to innovate due to the high costs involved.

This situation might actually be beneficial as it could spark creativity. The scarcity of resources, particularly GPUs, could trigger innovations on the algorithmic side. I recall a similar situation in the early days of computer science when people were predicting faster clock speeds as the solution to performance issues. It was only when they hit a physical limit that they realized the potential of parallelization, which completely changed the way people thought about performance.

Currently, the cost of building a data center for training state-of-the-art language models is approaching a billion dollars, not including the millions of dollars required for training. This is not a sustainable situation. I miss the days when I could run models and prototype things on a laptop.

One of the main problems we face is that we’ve allowed our models to become so large, on the assumption that compute infrastructure is infinite. These larger models are becoming slower because more time is spent moving data around than on the actual computation. When I was at Nvidia, anything below 90% of the so-called ‘speed of light’, the theoretical peak throughput of the chip, was considered bad. In many cases, however, large language models only achieve about 30-40% of the peak performance you can get from a GPU, which means a lot of compute power is wasted.
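
To make the utilization point concrete, here is a back-of-the-envelope sketch; the model size, throughput, and the 6-FLOPs-per-parameter-per-token rule of thumb are illustrative assumptions, not figures from Stability:

```python
# Rough model-FLOPs-utilization estimate for a hypothetical training run.
params = 7e9                  # model parameters (assumed)
tokens_per_second = 3_000     # observed training throughput (assumed)
flops_per_token = 6 * params  # common rule of thumb for transformer training

achieved_flops = tokens_per_second * flops_per_token  # ~1.3e14 FLOP/s
peak_flops = 312e12           # e.g. A100 dense BF16 peak, per the spec sheet

utilization = achieved_flops / peak_flops
print(f"{achieved_flops / 1e12:.0f} TFLOP/s achieved, {utilization:.0%} of peak")
# -> roughly 40% of peak: much of the chip's compute is left on the table.
```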

People often overlook this issue. When I suggest optimizing the code on a single GPU and running it on a small model before scaling up, many prefer to simply run it on multiple GPUs to make it faster. This lack of attention to optimization is a significant concern.

Aparna: As we wrap up, I’d like to pose a final question related to your experience at Neuralink, a company focused on brain-to-robot interaction. This technology has potential applications in assisting differently-abled individuals. Could you share your perspective on this technology? When do you anticipate it will be ready, and what applications do you foresee?

Kamil: My experience at Neuralink was truly an exciting adventure. I had the opportunity to work with a diverse team of neuroscientists, physicists, and biologists, all of whom were well-versed in computing and programming. Despite the initial intimidation, I found my place in this team and contributed to some groundbreaking work.

One of the primary challenges we aimed to address at Neuralink was the communication barrier faced by individuals whose cognitive abilities were intact but who were physically unable to express themselves. This issue was exemplified by the renowned physicist Stephen Hawking, who could only communicate by typing messages very slowly using tiny muscle movements.

Our initial project involved training macaque monkeys to play Pong while we simultaneously recorded data from their motor cortex. This allowed us to decode their brain signals and let the monkeys control something on the screen. Although it may not seem directly related to human communication, the same technology could be used to control a cursor and type messages, bypassing physical limitations.

We managed to measure the information transfer rate from the brain to the machine in bits per second, achieving a rate comparable to that of people typing on their cell phones. This was a significant milestone and one of the first practical applications of our technology. It could potentially benefit individuals who are paralyzed due to spinal injuries, enabling them to communicate despite their physical limitations.
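
As an aside, one standard way the BCI literature turns target counts and accuracy into bits per second is the Wolpaw information-transfer-rate formula; the sketch below uses made-up numbers and is not necessarily how Neuralink computed its figures:

```python
import math

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Bits conveyed per selection under the Wolpaw ITR formula."""
    if accuracy >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

# Illustrative numbers only: 26 on-screen targets, 90% accuracy, 1.5 picks per second.
bits = wolpaw_bits_per_selection(n_targets=26, accuracy=0.9)
print(f"{bits * 1.5:.1f} bits per second")  # ~5.7 bits/s
```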

However, our work at Neuralink wasn’t limited to decoding brain signals and reading data. We also explored the possibility of stimulating brain tissue to induce physical movements or visual experiences. This bidirectional communication could potentially allow individuals to interact with computers more efficiently, bypassing the need for physical input devices. It could even pave the way for a future where VR goggles are obsolete, as we could stimulate the visual cortex directly. However, the safety of these techniques is still under investigation, and it’s crucial that we continue to prioritize this aspect as we push the boundaries of what’s possible.

There’s a significant spectrum of disorders this technology could address, particularly for individuals who struggle with mobility or communication. We were also considering mental health issues such as depression, insomnia, and ADHD. One of the concepts we were exploring was the ability to read data from the brain, identify its state, and stimulate it accordingly. This could potentially serve as a substitute for medication or other forms of treatment.

However, it’s important to note that while the technology is progressing, it is not yet mature. The safety aspect is crucial and cannot be ignored. At Neuralink, we’ve done a remarkable job ensuring that everything we develop is safe, especially considering these devices are implanted in someone’s head.

When we consider brain stimulation, we must also consider potential negative scenarios. For instance, if we stimulate a certain region of the brain to alleviate depression, we could inadvertently create a dependency, similar to injecting dopamine. This could potentially lead to a loop where the individual becomes addicted to the stimulation. It’s a complex issue that requires careful consideration and handling.

In addressing these challenges, we’ve engaged in extensive conversations with physicians, neuroscientists, and other experts. While some companies may have taken easier paths, potentially compromising safety, we’ve chosen a more cautious approach. Despite the slower progress, I can assure you that whatever we produce will be safe. This commitment to safety is something I find particularly impressive.

Kamil: For those interested, it’s worth noting that Neuralink is currently hiring. They’ve recently secured another round of funding and are actively seeking new talent. This is indeed a glimpse into the future of technology.

Aparna: Earlier, you mentioned an intriguing story about the monkeys and reading their brain signals. That work involves a model running on the implant itself and communicating with the outside world. Could you elaborate on what happens with the models in this context?

Kamil: In our initial approach to decoding brain signals, we utilized a simple model. We had a vector of readings from 1024 electrodes, and our goal was to infer whether the monkey was attempting to move the cursor up, down, or click on something. We used static data from what we termed a pre-training session, which was essentially data recorded from the implant. The model was a two-layer perceptron, quite small, and could be trained in about 10 seconds. However, the brain’s signal distribution changes rapidly, so the model was only effective for about 10 to 15 minutes before we observed a degradation in performance. This necessitated the collection of new data and retraining of the model.
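
To make the shape of that decoder concrete, here is a minimal NumPy sketch of a two-layer perceptron mapping 1024 electrode features to a few cursor intents; the hidden size, class labels, and random weights are assumptions for illustration, not Neuralink’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS, N_HIDDEN, N_CLASSES = 1024, 64, 3   # classes: up, down, click (assumed)

# Untrained, randomly initialized weights; in practice these would be fit
# on data from a short calibration ("pre-training") session.
W1 = rng.normal(0.0, 0.01, (N_CHANNELS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.01, (N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def decode(features: np.ndarray) -> np.ndarray:
    """Map a (batch, 1024) electrode feature array to class probabilities."""
    h = np.maximum(0.0, features @ W1 + b1)           # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)          # softmax

# A model this small (~66k parameters) trains in seconds, which is what makes
# frequent recalibration practical as the signal distribution drifts.
print(decode(rng.normal(size=(1, N_CHANNELS))).round(3))
```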

Recently, Neuralink has started exploring reinforcement learning-based approaches, which allow for on-the-fly identification and retraining of the model on the implant. During my time at Neuralink, my focus was primarily on the inference side. We trained the model outside the implant, and my role was to make the inference parts work on the implant. This was a significant achievement for us, as we were previously sending data out and back in. Given our battery limitations, performing tasks on the implant was more cost-effective. The ultimate goal was to move the entire training process to the implant.

Every day, our brains produce varying signals due to changes in our moods and environments. These factors could range from being in a noisy place, feeling tired, or engaging in different activities. This results in a constantly shifting distribution of brain signals, which presents a significant challenge. This phenomenon is not only applicable to the brain but also extends to other applications in the medical field.

Aparna: We’ve discussed a wide range of topics, from hardware design to image and video generation, and even brainwaves and implant technology. Thank you so much for these perspectives Kamil!

Thank you to Kamil for his perspectives on these exciting AI topics. To read more about Pear’s AI focus and previous Perspectives in AI talks, visit this page.