PearX Demo Day W23

Over the past decade, Pear has spearheaded pre-seed rounds for exceptionally early-stage companies. PearX, our 14-week bootcamp for ambitious founders, has proven to be a breeding ground for category-defining pioneers such as Affinity, Xilis, Capella Space, Nova Credit, Cardless, and Viz.ai.

The W23 cohort of PearX was no exception, consisting of thirteen cutting-edge companies pushing the boundaries of AI, healthcare, consumer products, fintech, and climate solutions. Our W23 Demo Day on May 25th was hugely successful: 1,500+ investors tuned into the livestream, and we’ve already facilitated more than 650 investor introductions for the 13 participating companies.

With an acceptance rate of less than 0.25%, these companies rose to the top of a pool of 4,500 applicants, a testament to their exceptional talent and groundbreaking ideas. The cohort also has significant female leadership representation, with over 40% of the CEOs/CTOs being women (only 9% of venture-backed entrepreneurs are women). We’re proud that PearX W23 is setting a strong example of gender diversity and equity in entrepreneurship.

So, without further ado, let’s delve into the extraordinary companies that make up the PearX W23 cohort:

AI:

Hazel

Founders: Henry Weng and Vedant Khanna 

Hazel is the AI-powered operating system for realtors that 10x’s their productivity and increases sales. Nearly half of a realtor’s work consists of repetitive backend tasks like middleman communication, document preparation, and project management. Realtors can’t manage backend work on their own as their business scales, so the largest ones operate like SMBs with staff, software, and assistants. Hazel integrates with a realtor’s email, text, and knowledge bases, leveraging AI to parse unstructured data and become their single system of record (and supercharged CRM). From that point, Hazel automates routine tasks using generative AI, freeing realtors to focus on getting more clients instead of more paperwork.

SellScale
Founders: Ishan Sharma and Aakash Adesara

SellScale is an AI-powered Sales Development Representative. SellScale streamlines the end-to-end process of setting demos for salespeople, addressing challenges such as high SDR turnover rates and low email conversion rates. By leveraging AI-generated copy that outperforms human-produced content, SellScale offers scalable and data-driven solutions, allowing sales teams to achieve superior results in their outreach efforts without excessive investments in personnel and tools.

Tare
Founders: Eileen Dai and Max Sidebotham

Tare is an AI-powered email marketing solution that makes data-driven automation easy for modern e-commerce brands. Until now, email has been an extremely manual & resource-heavy marketing channel, and it wasn’t possible to automate everything from end-to-end. 66% of current e-commerce email spend goes to labor & agency service costs. Tare helps brands optimize their email marketing with automated customer segmentation, AI-generated content & imagery, and automated scheduling for delivery.

Saavi

Founder: Maya Mikhailov

Saavi revolutionizes enterprise AI deployment, eliminating costly and complex processes. This user-friendly platform enables effortless implementation without the need for developers or data scientists, providing quick deployment in minutes. With its intuitive interface and self-optimizing capabilities, Saavi empowers businesses to unlock the potential of AI, delivering tailored insights for informed decision-making in areas like fraud risk assessment, customer churn prediction, and growth identification. Embrace the future of AI deployment with Saavi and experience ease, speed, and accuracy in transforming your enterprise.

Consumer: 

Champ

Founder: Ryan Rice

Champ is a disruptive fantasy sports platform that caters specifically to college sports fans. Unlike major platforms that focus on professional leagues, Champ offers an ownership-based fantasy experience for college football and basketball. It empowers fans to buy, sell, and collect digital trading cards featuring their favorite collegiate athletes, using them to construct lineups for fantasy leagues. By addressing the weak ties many fans have to professional leagues and capitalizing on the significant influence of college sports in certain areas, Champ allows users to actively participate in the excitement of college sports and forge deeper connections with their teams.

Ego

Founders: Vishnu Hari and Peggy Wang

Ego is an immersive live streaming platform that uses Gen AI to create fully face-tracked 3D avatars. We believe in a future where streaming as a virtual avatar surpasses real-life live streaming, driven by the digital natives of Gen Z who seek pseudonymous online identities. To realize this vision, we developed an app that enables users to generate a 3D avatar, which perfectly mirrors their facial expressions, and live stream on platforms like Twitch and YouTube in 90 seconds. Users can profit from selling virtual goods, customizing avatar appearances, engaging in entertaining mini-games, or orchestrating immersive role-plays. 

AfterHour

Founder: Kevin Xu

AfterHour revolutionizes the retail stock trading experience by providing a pseudonymous and voyeuristic social network that focuses on verified trades and stock picking gurus, addressing the lack of trust and entertainment in the online trading community. By connecting brokerage accounts via Plaid and making portfolios public, users gain transparency and can distinguish between legitimate shareholders and random lurkers, while maintaining anonymity. AfterHour’s unique approach combines anonymous yet verified engagement, bridging the gap between authenticity and privacy in a way that no other platform currently does.

Pando

Founders: Charlie Olson and Eric Lax

Pando introduces Income Pooling, a solution that offers risk diversification and fosters a financially-aligned community to address the increasing shift toward a winner-take-all economy. By leveraging machine learning and artificial intelligence, Pando accurately predicts future earnings, enabling groups of high potential individuals to pool their income. With product-market fit in professional baseball and entrepreneurship, Pando has created a new asset class for all professionals in power-law careers by unlocking the value of potential income and reimagining the financial services experience for these individuals.

Juni

Founder: Vivan Shen

Juni revolutionizes personalized tutoring by bringing it online, addressing the scalability challenges and affordability issues of traditional private tutoring. With purpose-built AI models, Juni provides tailored content, questions, and hints to meet individual students’ needs while incorporating context and motivation. The platform establishes a feedback loop with students to customize instructor feedback and enhance the tutoring experience. By leveraging a specialized dataset and the expertise of top instructors, Juni ensures high-quality tutoring for all students, aiming to make educational support accessible and to empower students to thrive.

Climate:

Lasso

Founders: Nicole Rojas and David Pardavi  

Lasso is reshaping on-farm emissions reduction with an automated software platform that streamlines data collection, carbon footprint calculation, and form filling, reducing the time spent from over 60 hours to just one hour per farm. Seamlessly integrating with existing on-farm software systems, Lasso extracts vital information and provides real-time insights through integration with carbon calculator tools. By automating verification, funding forms, and auditing reports, Lasso eliminates manual creation and consolidates farm, project, and emissions data for efficient management and cost savings, making GHG emissions reduction a reality at scale.

Nuvola

Founder: Janet Hur

Nuvola is an advanced battery materials company focused on enhancing battery profitability and eliminating the primary cause of battery fires. With their innovative Safecoat Direct Deposition Separator technology, they reduce reject material costs by 82% and improve yield by 50%, addressing the challenges faced by the lithium-ion battery industry, which is projected to exceed $200 billion in the next five years. Their solution tackles the error-prone process of folding separator film, which has been responsible for catastrophic fires and billions of dollars in recalls. By providing safer and more efficient batteries, Nuvola supports the growth of the electric vehicle, energy storage, and e-aircraft industries.

Healthcare:

Stellation

Founders: Michelle Xie and Diana Zhu

Stellation is rethinking patient-provider matching with its analytics platform, addressing the lack of accurate information in matching patients with healthcare providers based on their specific health needs. Current practices rely on proximity and availability, neglecting performance data and resulting in limited outcomes and cost savings. Stellation’s SaaS platform bridges this gap by leveraging claims and patient data to profile provider strengths and match individuals with the most suitable healthcare provider. Built on a research-backed methodology, this powerful solution is delivered via API integration for health plans or a user-friendly search interface for patients. By utilizing Stellation, health plans can achieve significant cost savings and improved outcomes for their members.

Enterprise:

Polimorphic

Founders: Parth Shah and Daniel Smith

Polimorphic revolutionizes government operations by automating back-office tasks and transforming how cities track and manage requests, replacing outdated analog methods with efficient digital solutions. With a significant reliance on post-it notes and paper files, government processes face inefficiencies and the impending challenge of a labor shortage. Polimorphic’s focus on the customer service layer of government empowers cities to streamline operations, enhance compliance, and improve the overall citizen experience by offering solutions for constituent requests, payment collection, request tracking, and data management.

Today’s markets are shifting faster than ever before, and Pear is dedicated to evolving alongside them. Each of these 13 companies from PearX W23 showcases immense potential to disrupt its industry and drive positive change through the power of technology.

At Pear, we are honored to play a pivotal role in the journey of these exceptional startups. We believe in the transformative power of early-stage investments and the tremendous impact they can have on shaping the future. As we continue to identify and nurture the next generation of trailblazers, we remain steadfast in our commitment to fostering innovation, driving growth, and unlocking the extraordinary potential within each entrepreneur we encounter.

If you want to partner with any of these companies, please check them out at: https://demoday.pear.vc/. For anyone who missed our Demo Day, we’ll be sharing videos of each team this Thursday, so stay tuned!

Pear and Pacific Western Bank Partnership

We have some exciting news to share: Pear and Pacific Western Bank have partnered up to bring our pre-seed and seed startups a deeper level of banking products and services. With this partnership, PWB will be offering dedicated support and exclusive banking offerings to Pear founders to help them scale their startups from the earliest days.

Pear has been focused on pre-seed and seed investing for 10 years now, partnering with founders at the earliest stages to turn great ideas into category-defining companies. We know now more than ever that banking partners are a really critical part of the startup ecosystem. 

“We love partnering with PWB, because they’re truly dedicated to supporting the growth of entrepreneurs. They understand the challenges that come with scaling and offer a wide range of banking services and products. Many of our portfolio companies have partnered with PWB, and we’ve seen that they can offer the banking structure and resources startups need to navigate these challenging times,” says Pejman, Founding Managing Partner of Pear.

As part of this partnership, PWB will provide hands-on training to PearX, Female Founders Circle, and Pear Dorm members, offering to help navigate all things startup banking – from setting up your first corporate bank accounts to understanding the implications of the current market. 

“Pacific Western Bank is very excited to partner with Pear VC and support our mutual commitment to innovation in the technology and life science ecosystems,” added Mark diTargiani, SVP, Venture Debt & Startup Banking at PWB. “The Pear team has built a legacy of investing in quality people and companies, and our team has 15+ years of supporting VCs and founders as they build companies. We are aligned on helping founders from inception to exit, and by combining our networks and resources we can accomplish that in Silicon Valley and beyond.”

We’re excited for the expertise PWB brings to the table, helping our startups navigate the current financial landscape and be better set up to grow and surpass their business goals.

Perspectives in Generative AI with Stanford Researchers

We recently hosted a fireside chat on safe and efficient AI with notable Stanford CS PhD researchers Dan Fu and Eric Mitchell. The conversation covered various aspects of AI technology, including the innovations that Dan and Eric have pioneered in their respective fields.

Dan is co-inventor of FlashAttention. He’s working on improving efficiency and increasing the context length in Large Language Models (LLMs). His experience in developing groundbreaking AI technologies allows him to provide profound insights into the future capabilities of LLMs. During the event, Dan discussed the implications of his work on enabling new generative AI use cases, as well as brand new techniques for efficient training.

Eric’s work focuses on AI safety and responsibility. He is the co-author of DetectGPT, a tool capable of differentiating between AI-generated and human-generated text. In recent times, DetectGPT has gained press attention for its innovative approach to addressing the growing concern with AI-generated content. Eric shared his thoughts on the potential impact of DetectGPT and similar tools, discussing the necessity for safe AI technologies as the field expands.

During the discussion, we touched on practical applications of generative AI and the forecast for open source vs. proprietary LLMs, as well as the prospect of AGI, its ethical ramifications, cybersecurity implications, and the overall societal effects of these emerging technologies.

For those who couldn’t attend in person, we are excited to recap the high points today (answers are edited and summarized for length):

Aparna: Can you tell us a bit about yourselves and your motivation for working in AI?

Dan: I focus on making foundation models faster to train and run, and I’m interested in increasing sequence length to allow for more context in the input data. The goal is to make sure you’re not limited to a specific number of tokens: you can feed in as much data and context as you’d like and use it to teach the model what you want it to say. I’ve been interested in machine learning for a long time and have been at Stanford for five years now. It’s a thrilling time to work in this field.

Eric: I’m a fourth-year PhD student at Stanford, and I got into AI because of my fascination with the subjective human experience. I’ve taken a winding road in AI, starting with neuroscience, 3D reconstruction, robotics, and computer vision before being drawn to the development of large language models. These large language models are really powerful engines, and we’re sort of just starting to build our first cars that can drive pretty well. But we haven’t built the seatbelts, the antilock brakes, and the other safety and quality-of-life technologies. So that’s what I’m interested in.

Aparna: What major breakthroughs have led to the recent emergence of powerful generative AI capabilities? And where do you think the barriers are to the current approach?

Dan: That’s a really great question. There has been a seismic shift in the way machine learning (ML) has been done over the past three to four years. The old way was to break a problem into small parts, train models to solve one problem at a time, and then use those pieces to build up a system. With foundation models, we took the opposite approach: we trained a model to predict the next word in a given text, and these models can now do all sorts of things, like write code, answer questions, and even write some of my emails. It’s remarkable how the simplest objective can scale up to create the largest models possible. Advances in GPUs and training systems have also allowed us to scale up and achieve some incredible things.
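To make that "predict the next word" objective concrete, here is a toy sketch of the same training signal, with a bigram frequency table standing in for the neural network. This is our illustration, not anything from Dan's work; real foundation models learn this objective at vastly larger scale.

```python
# Next-word prediction in miniature: count which word follows which,
# then predict the most frequent continuation. A bigram table stands
# in for the neural network; the objective is the same.
corpus = "the cat sat on the mat and the cat ran".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def predict_next(word):
    """Return the most frequent continuation seen during 'training'."""
    options = counts.get(word)
    return max(options, key=options.get) if options else None

print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat' once)
```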

I think one of the barriers is the technical challenge of providing sufficient context to the models, especially when dealing with personal information like emails. Another barrier is making these models more open and accessible, so that anyone can see what goes into them and how they were trained, the same way anybody can look at a Kubernetes stack and see exactly what’s happening under the hood, or open up the Linux kernel and figure out what’s running in there. Those are frontiers that I hope we push on pretty quickly. This would enable better trust and understanding of the models.

Eric: I agree with Dan’s points. Additionally, a challenge we’re facing is the need to solve specific problems with more general models. However, we’ve found that large scale self-supervised training can be effective in tackling these specific problems. For example, the transformer architecture has been helpful in representing knowledge efficiently and improving upon it. In general, the ability to do large scale self-supervised learning on just a ton of data has been key to the recent progress.

Furthermore, we need a way to explain our intent to the model in a way that it can correctly interpret and follow it. This is where the human preference component comes in. We need to be able to specify our preferences to the model, so that it can draw upon its knowledge and skills in a way that is useful for us. This is a qualitative shift in how these models interact with society, and we are only scratching the surface.

Aparna: I’d like to go a little bit deeper technically. Dan, could you explain how your work with attention has made it possible to train these large generative AI models?

Dan: Sure, I can give a brief overview of how attention works at a high level. You have these language models, and when you give one a sentence, the attention mechanism compares every word in that sentence to every other word. If you have a databases background, it’s kind of like a self join: you have a table that is your sentence, and you join it to itself. This leads to some of the amazing abilities that we’ve seen in generative AI. However, the way attention used to be calculated was quite inefficient. You would compare every word to every other word, resulting in a hard limit on the context of the models. This meant the maximum context length was around 2,000 tokens, which is what could fit in memory on an A100 GPU.
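To make the self-join analogy concrete, here is a minimal NumPy sketch of standard softmax attention. The framing and variable names are ours, and this is illustrative rather than any production kernel; the point is the full n-by-n score matrix, whose quadratic memory cost is what imposed the hard context limit Dan mentions.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard softmax attention: every token compared to every other.

    Q, K, V: (n, d) arrays for a sequence of n tokens. The (n, n) score
    matrix below is the 'self join' -- and its quadratic memory cost is
    what capped context lengths around 2k tokens.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (n, n) comparisons
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax per row
    return weights @ V                               # (n, d) output
```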

If you look at databases and how they do joins, they don’t write down all the comparisons at once; they do it block by block. About a year ago, we developed an approach called FlashAttention, which reduces the memory footprint by doing the comparisons block by block. This enabled longer context lengths, allowing us to feed in a whole essay instead of just a page of text at a time. We’ve been really humbled by the very rapid adoption: it’s in PyTorch 2.0, and GPT-4, for example, has a context length of 8k with a 32k option.
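Here is a sketch of that block-by-block idea, using the online-softmax trick that FlashAttention builds on. It computes the same result as the naive version above while only ever materializing an (n, block_size) slice of the score matrix. The real FlashAttention kernel fuses this loop into GPU on-chip memory, which this NumPy version does not attempt.

```python
import numpy as np

def blockwise_attention(Q, K, V, block_size=128):
    """Attention computed one block of keys/values at a time.

    Mathematically equivalent to naive softmax attention, but only an
    (n, block_size) slice of scores exists at any moment -- the memory
    reduction at the heart of FlashAttention.
    """
    n, d = Q.shape
    out = np.zeros((n, d))
    row_max = np.full((n, 1), -np.inf)   # running max of scores per query
    row_sum = np.zeros((n, 1))           # running softmax denominator
    for start in range(0, n, block_size):
        Kb = K[start:start + block_size]
        Vb = V[start:start + block_size]
        s = Q @ Kb.T / np.sqrt(d)                         # (n, block) only
        new_max = np.maximum(row_max, s.max(axis=-1, keepdims=True))
        scale = np.exp(row_max - new_max)                 # rescale old partials
        p = np.exp(s - new_max)
        row_sum = row_sum * scale + p.sum(axis=-1, keepdims=True)
        out = out * scale + p @ Vb
        row_max = new_max
    return out / row_sum
```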

Aparna: That’s really interesting. So, with longer context lengths, what kinds of use cases could it enable?

Dan: The dream is to have a model that can take all the text ever written and use it as context. However, there’s still a fundamental limitation to attention, because even with a reduced memory footprint, you’re still comparing every word to every other word. If you think about how language works, that’s not really how we process it. I’m sure you can’t remember every word I’ve said in the past few minutes; I can’t even remember the words I was saying. That really led us to ask: are there alternatives to attention that don’t scale fundamentally quadratically? We’ve been working on some models called Hungry Hungry Hippos, and we have a new one called Hyena, where we try to make the context length a lot longer. These models may have the potential to go up to hundreds of thousands of words, or even millions. And if you can do that, it changes the paradigm of what you can do with these models.

Longer context lengths enable more complex tasks such as summarization, question answering, and machine translation. They also allow for more efficient training on large datasets by utilizing more parallelism across GPUs. And if you have a context length of a million words, you could feed your whole training set in as input and have an embodied AI: say, a particular agent that behaves in a personalized way when responding to emails or talking to clients.

Longer context can also be particularly useful in modalities like images, where it means higher resolution. For example, in medical imaging, where we are looking for very small features, downsampling the image may cause loss of fine detail. In the case of self-driving cars, longer context means the ability to detect objects that are further away and at a higher resolution. Overall, longer context can help us unlock new capabilities and improve the accuracy of our models.

Aparna: How do you see the role of language models evolving in the future?

Dan:  I think we’re just scratching the surface of what language models can do, and there are so many different ways that they can be applied. One of the things that’s really exciting to me is the potential for language models to help us better understand human language and communication. There’s so much nuance and complexity to how we use language, and I think language models can help us unpack some of that and get a better understanding of how we communicate with each other. And of course, there are also lots of practical applications for language models, like chatbots, customer service, and more.

Personally, I’m very excited to see where small models can go. We’re starting to see models that have been trained much longer than we used to train them, like a 7 billion or 13 billion parameter model that, with some engineering, people have been able to run on a laptop. When you give people access to these models in a way that is not super expensive to run, you start to see crazy applications come out. I think it’s really just the beginning.

Eric: It has been an interesting phase change going from GPT-3 to GPT-4. I don’t know how many people have played with these models side by side, or have seen Sebastien Bubeck’s somewhat infamous “First Contact” talk, where he goes through some interesting examples. One thing that’s weird about where the models are now is that the pace of progress used to be slower than the time it took to understand the capabilities of the technology, but recently it has felt like a bit of an inversion. I would be surprised to see this slow down in the near future. And I think it changes the dynamic in research.

Most machine learning research is quantitative, focused on building models, evaluating them on datasets, and getting higher scores. However, Sebastien’s talk is interesting because it evaluates models qualitatively with no numbers, which feels less rigorous but has more credibility due to Sebastien’s rigorous research background. The talk includes impressive examples, such as a model drawing a unicorn or writing 500 lines of code for a 3D game. One fascinating example is the model coaching people in an interpersonal conflict, providing direct and actionable advice that is useful in real-life situations. A big caveat is that current outputs from GPT-4 are much worse than the examples given in the talk. Sebastien’s implication or claim is that aligning the model to follow human intent better reduces its capabilities. This creates a tough conflict between economic incentives and what’s useful for society. It’s unclear what people will do when faced with this conflict.

Aparna: Do you think there will be ethical concerns that arise as language models become more sophisticated?

Eric: Yeah, I think there are also going to be questions around ownership and control of these models. Right now, a lot of the biggest language models are owned by big tech companies, and there’s a risk that they could become monopolies or be used in ways that are harmful to consumers. So we need to be thinking carefully about how we regulate and govern these models, and make sure that they’re being used in a responsible and ethical way.

One of the big challenges is going to be figuring out how to make language models more robust and reliable. Right now, these models are very good at generating plausible-sounding text, but they can still make mistakes and generate misleading or incorrect information. So I think there’s a lot of work to be done in terms of improving the accuracy and reliability of these models, and making sure that they’re not spreading misinformation or bias.

Aparna:  Given your PhD research Eric, what are the main areas that warrant concern for AI safety and responsibility?

Eric: In summary, there are three categories of issues related to AI ethics. The first category includes concrete near-term problems that many in the AI ethics community are already working on, such as unreliable and biased models that may dilute collective knowledge. The second category is a middle-term economic alignment problem, where incentives in industry may not be aligned with making models that are safer or more useful for society. The third and longest-term category involves high-stakes decisions made by very capable models, which could be used by bad actors to do harm or may not align with human values and intentions. While some may dismiss the risks associated with these issues, they are worthy of serious consideration.

My research is focused on developing auxiliary technologies to complement existing mass-produced products. I am specifically working on model editing, pre-training models in safer ways, and developing detection systems for AI-generated texts. The aim is to give practitioners and regulators more tools to safely use large language models. However, measuring the capabilities of AI systems is challenging, and my team is working on building a comprehensive public benchmark for detection systems to help better assess their performance.

Aparna: I’m excited about the prospect of having evaluation standards and companies building tooling around them. Do you think there’ll be regulation? 

Eric: In my opinion, we can learn from the financial crisis that auditors may not always work in practice, but a system to hold large AI systems to sensible standards would be very useful. Currently, there are questions about what capabilities we can expect from AI systems and what technologies we have to measure their capabilities. As a researcher, I believe that more work needs to be done to give regulators the tools they need to make rules about the use of AI systems. Right now, we have limited abilities to understand why an AI model made a certain prediction or how well it may perform in a given scenario. If regulators want to require certain things from AI model developers, they need to be able to answer these questions. However, currently, no one can answer these questions, so maybe the only way to ensure public safety is to prohibit the release of AI models until we can answer them.

Aparna: Stanford has been a strong contributor to open source and we’ve seen progress with open models like Alpaca, Dolly, and Red Pajama. What are the advantages and disadvantages of open sourcing large language models?

Dan: As an open source advocate and a researcher involved in the Red Pajama release, I believe making these large language models open source can help people better understand their capabilities and risks. The release of the 1 trillion token dataset allowed us to question what goes into these models and what happens if we change their training data. Open sourcing these models and datasets can help with understanding their inner workings and building on them. This is crucial for responsible use of these models.

The effort behind Red Pajama is to recreate powerful language models in an open manner by collecting pre-training data from the internet and human interaction data. The goal is to release a completely open model that is auditable at every step of the process. Small models trained on a lot of text can become surprisingly powerful, as seen in the 7 billion parameter models that can fit on a laptop. The LLaMA model from Facebook, by contrast, is not completely open: it requires filling out a form and has a questionable license.

Eric: The open source topic is really interesting. I think many people have heard about the letter calling for a pause on AI research. Open source is great, and it’s why OpenAI relies on it a lot. However, a few weeks ago, a bug in an open source framework they were using caused some pretty shocking privacy violations for ChatGPT users, where you could see other people’s chat histories. In some sense, I think the cat is already out of the bag on the open source question. The pre-training phase is where a lot of the effort goes into these models, and we already have quite a few really large pre-trained models out there. So even if we paused right now and said no more big pre-trained models can be released, there’s already enough out there for anyone who is worried about it to worry a lot.

Aparna: So with these smaller models, running on laptops and on mobile and edge devices what new use cases will open up? 

Dan: Sure, I think it’s amazing that our phones have become so powerful over the past decade. If I could have a language model running on my phone that functions as well as the GPT models we have today, and can assist me in a conversational way, that would be awesome.

Eric: I think it’s exciting and cool from a privacy perspective to have these models running locally. They can be really powerful mental health professionals for people, and I believe these models can be meaningful companions to people as well. Loneliness sucks, and the COVID years have made this very clear to a lot of people. These are the types of interactions that these models are best suited for. They understand what we’re saying, they can respond intelligently, and they can ask us questions that are meaningfully useful.

From this perspective, having them locally to do these types of things can be really powerful. Obviously, there’s a significant dual-use risk with these models, and we’ve tried to do some work to partially mitigate these things. But that’s just research right now. There are already very real and powerful models out there.

I think it’s great and exciting, and I’d be lying if I said I couldn’t foresee ways this could be problematic. But the cat is out of the bag, and I believe we will see some really cool and positive technologies from it.

Aparna: My final question is about AutoGPT, a new framework that uses GPT to coordinate and orchestrate a set of agents to achieve a given goal. This autonomous system builds upon the idea of using specialized models for specific tasks, but some even argue that this approach could lead toward AGI. Do you believe this technology is real and revolutionary?

Eric: Yes, AutoGPT is a real framework that uses large language models to critique themselves and improve their performance. This idea is powerful because it suggests that models can improve themselves without the need for constant human feedback. However, AutoGPT is not yet advanced enough to replace human jobs, as it can still get stuck in loops and encounter situations where it doesn’t know what to do. It’s also not trustworthy enough to handle tasks that require a high level of complexity and verification. While the ideas behind AutoGPT are promising, it’s not a revolutionary technology in and of itself, and it doesn’t massively improve the capabilities of GPT.
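In rough terms, the plan/act/critique pattern Eric describes looks something like the sketch below. The `llm` function is a hypothetical stand-in for any chat-completion call, not AutoGPT's actual code; the loop structure, including the step budget guarding against the infinite loops he mentions, is the point.

```python
# A minimal sketch of an AutoGPT-style self-critique loop. The `llm`
# stub is hypothetical; swap in a real chat-completion call.

def llm(prompt: str) -> str:
    # Canned stub so the sketch runs end to end.
    if "Did this make progress" in prompt:
        return "DONE"
    return "drafted a step"

def autonomous_loop(goal: str, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):
        plan = llm(f"Goal: {goal}\nHistory: {history}\nPropose the next step.")
        result = llm(f"Execute this step and report the outcome: {plan}")
        critique = llm(
            f"Goal: {goal}\nStep: {plan}\nOutcome: {result}\n"
            "Did this make progress? Answer DONE, RETRY, or CONTINUE."
        )
        history.append((plan, result, critique))
        if "DONE" in critique:
            return result
        # Without a termination check, the agent can loop forever -- the
        # failure mode Eric describes above.
    return "stopped: step budget exhausted"

print(autonomous_loop("summarize this week's sales emails"))
```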

Dan: So, I was thinking about what you said earlier about the generative AI revolution and how it’s similar to the internet boom in 2000. But I see it more like electricity, it’s everywhere and we take it for granted. It’s enabled us to do things we couldn’t before, but it has also displaced some jobs. For example, we don’t have lamplighters or people who manually wash clothes anymore. However, just like how people in the early 20th century imagined a future where everything would be automated with electricity, we still have jobs for the moment. It’s hard to predict all the impacts AI will have, but it will certainly change the types of jobs people are hired for. I think it’ll become more integrated into our daily lives and introduce new challenges, just like how electrical engineering is a field today. Maybe we’ll see the emergence of foundation model engineering. That’s just my two cents on AGI – I’m not sure if it’ll be fully realized or just a tool to enhance AI capabilities.

Eric: I think the employment question is always brought up in discussions about AI, but it’s not clear that these models can replace anyone’s job right now or in the near future. They are good for augmenting people, but not at tasks they’re not already qualified for. It’s not a drop-in replacement for humans. I don’t think we’ll see mass unemployment, like with the electricity revolution. The internet analogy is similar, in that it was thought to make people more productive, but it turned out to be a distraction tool as well. Generative AI may not have a net positive impact on productivity in the near term, but it will certainly entertain us.

AR/VR/XR/PEAR: our call for mixed reality builders 

In the past few months, we have seen the beginnings of rising interest in building AR, VR, mixed reality, and extended reality infrastructure and applications (we’ll just call it “XR” for short): XR applications to PearX are up 5x this year, dedicated XR hacker groups are proliferating at top engineering schools like Harvard and Stanford, and our tracking shows hundreds of founders have left venture-backed and major tech companies to build in the XR space.  

We think this is because XR has the potential to represent one of the most consequential new platforms of the coming decade, and there is substantial alpha to be had for builders who study this burgeoning space and seize early opportunities. 

We expect interest in building in XR only to increase dramatically – particularly following Apple’s upcoming Reality Pro headset announcement. We see builders with a measured sense of urgency having an advantage, and we’ve put together a high-level guide for exploring ideas in XR. What follows is merely one way of cataloging opportunities; we would love to meet and speak with founders who are building early, quickly, and curiously in the broader XR space. 

XR Builder’s Guide to Exploring Ideas

A Familiar “Infrastructure and Applications” Approach

With any new technology, there are opportunities in foundational infrastructure (making the technology easier to deploy or adding capabilities to what can be built for / with it) and novel applications (tools built with the new technology to help users achieve something they could not previously do).

This approach often starts by asking what new infrastructure developers are building, and then asking what applications can be built atop it. 

In XR, substantial existing infrastructure will be first-party specific to headset makers. So, it is worth considering what initial applications may be built on foundations purpose-built for available devices – and which use cases may find breakout success among users of those devices. 

XR applications

Consumer

Gaming appears to be the first breaking wave of consumer XR, and will likely lead the way for the foreseeable future. Unity’s 2023 report showed that more than half of all console developers are now building for VR platforms, too. It’s been said that “Every game developer wants to be a VR game developer – they just need to find a way to get paid for it.” That may soon no longer be a problem.

According to The Economist and Omdia, global gaming spending will eclipse $185B this year, with half of consumer spend going to mobile games. As AAA titles and boredom-busting mobile games alike are rendered in XR, it stands to reason that anyone with access to an XR device will prefer gameplay in a more immersive form – meaning that a sizable share of the gaming market may shift to XR in the next decade.

Gamified consumer training is already proving its effectiveness in athletics: thousands of baseball players, amateur and professional alike – including the reigning MVP – use WIN Reality’s Quest app to perfect their swings, with measurable improvements on their performance. 

We are also excited about consumer applications in more passive streaming entertainment, social content sharing, education, and commerce. Many of the hours of daily screen time spent watching asynchronous content – social media feeds or professional productions – or browsing e-commerce sites may feel more vivid in an immersive environment. 

Enterprise 

Hardware fragility and cost may prevent near-term widespread enterprise adoption of B2B XR across business types. Meanwhile, few people – including us – are excited about open-ended co-working in an immersive XR environment. 

But, there are impactful vertical and horizontal applications alike that may soon make enterprise XR familiar and essential in many use cases, especially at large companies. Horizontal enterprise tools may include general-purpose training and demo environments: collaborative spaces built to allow anyone to host a classroom or sales presentation. Early deployments of immersive training tools have shown efficacy in use cases as diverse as nursing, UPS driver safety, and law enforcement.    

For more specialized B2B applications, initial verticals may include architecture and interior design, product design and mechanical engineering, repair and maintenance diagnostics and installation, and healthcare diagnostics and treatment simulation – among many other sectors.

Key questions for XR application builders 

With any application, we encourage prospective founders to consider first-party risk: What core applications are platform creators likely to want to own? FaceTimeXR may be a more likely winner than a new developer’s video chat app for Apple RealityPro. But, a social app to flip through your photo library immersively, in real-time and alongside the friends in those images, may be less likely in Apple’s first-party domain.  

We also encourage XR builders to have an airtight “why XR” case for any application: what vital benefit does an immersive, co-present environment offer to your user over alternative interfaces? 

XR Infrastructure

Developer tools 

Wise founders will study the initial wave of XR adoption and ask which high-use applications are breaking, underwhelming, or borderline impossible on existing first-party or open infrastructure. Many of the most compelling opportunities will be in bolstering core experiences: interactivity and copresence, audio, messaging, 3D asset generation, 2D/3D web interoperability, and streaming reliability and observability.

Monetization enablement 

An entire ecosystem of companies will support familiar business models, applied to XR use cases. While many elements of these business models may be unchanged for a new interface, there will undoubtedly be novel components. E-commerce checkout flows will feature transaction-splitting for social shopping. A wave of analytics and marketing tools will help businesses identify lucrative, impression-rich actions users take in XR applications, and usage-based billing providers will emerge to track and monetize novel ways of product use. 

Key questions for XR infrastructure builders 

Although removed from the end XR application consumer, XR infrastructure builders should start with this headset-adorned or AR-phone-app-wielding user in mind. In a nascent space, an infrastructure builder needs to be sure there are enough end users who will in turn adopt what your own customers build with your tool or on your platform. Even if a developer can build something novel and powerful with your tools, your success relies on that developer’s distribution.

===

These are merely a few of the possible areas to explore. The promise of XR lies in the experiences we cannot yet clearly see, and the infrastructure built to enable them. If you’re building now, send us a note at XR@pear.vc. And if you’ll be in LA for LA Tech Week, join us on 6/7 at our XR breakfast; there will be plenty to discuss the week of Apple WWDC!

Pear Competition: Harvard Winners

At Pear, we have a long history partnering with students to build the next wave of category-defining companies. In fact, we started working with the founders of companies like Branch, Aurora Solar, Bioage, and Affinity when they were still students.  Each year, we host Pear Competition where students can receive up to $100,000 to launch their startups.  

This year, there were five winners of Pear Competition at Harvard, and they received prizes between $25,000 and $100,000 each. These winners were selected based on their ideas across various fields from AI infrastructure to virtual reality and more. We’re excited to shine a spotlight on this year’s winners:

Civic Roundtable 

Civic Roundtable is a collaboration platform for public servants: a “Reddit meets LinkedIn” designed with and for public servants. Founders Madeleine Smith and Austin Boral are joint Harvard Business School / Harvard Kennedy School of Government students with experience working in and with the public sector, where they saw firsthand the inefficiencies that state, local, and federal workers face in their attempts to collaborate. Civic Roundtable brings these workers together in one place to share knowledge and coordinate their joint efforts.

Gigit.ai

Gigit.ai, founded by Harvard Business School student Inez Wihardjo, is plug-and-play infrastructure that enables enterprises to own and train specialized AI models. Unlike existing players, Gigit offers an integrated, self-hosted system of AI plus subject-matter experts to ensure accuracy and guard against hallucination, all while preserving enterprises’ data privacy.

Delilah

Delilah is a personalized AI assistant founded by Harvard College undergraduates Khoi Nguyen and Raunak Daga, two members of Pear Garage at Harvard. Khoi and Raunak are hackers at heart, and have built and battle-tested a suite of productivity tools with their target market of GenZ students. We couldn’t be more excited to support their mission to bring the game-changing tools of an all-purpose AI assistant to this demographic. 

BuyXR

BuyXR, founded by Harvard College undergraduates Sam Suchin and Will Schrepferman, is a virtual reality e-commerce platform that allows any seller – from the world’s biggest brands to limited edition creators – to offer their merchandise in an immersive, gamified world. 

Stealth 

More to come about our fifth winner when they launch! 

We’ve long believed that, with the right tools and resources, student builders can build extraordinary companies. These five Pear Competition winners at Harvard embody the spirit of student entrepreneurship and innovation, and we’re excited to see where they go!

Pear Competition: Stanford GSB Winners

At Pear, we have a long history partnering with students to build the next wave of category-defining companies. In fact, we started working with the founders of companies like Branch, Affinity, Aurora Solar, Viz.ai, and Bioage when they were still students.  Each year, we run Pear Competition where students can receive up to $100,000 to launch their startups.  

This year, there were four winners of Pear Competition at Stanford’s Graduate School of Business (GSB), and they received checks between $25,000 and $100,000 each. These GSB winners were selected based on their ideas across various fields from BioTech to Climate Tech and more. We’re excited to shine a spotlight on this year’s Pear Competition winners:

RadarTx

RadarTx is a biotech company that aims to revolutionize the treatment of a broad array of diseases by bringing precision targeting to gene- and mRNA-based therapies. Most current nucleic acid therapies suffer from a lack of specificity due to expressing their payload protein drugs in every cell – a problem that can severely limit their efficacy and safety. RadarTx has developed a proprietary RNA sensing platform that can precisely control the expression and activity of such drugs at the cell type or even cell state level, thereby improving the therapeutic potential of this important class of medicines.

Komo

Komo is redesigning how search and discovery works for both users and businesses. Komo’s platform is designed to help users find the right information faster and more efficiently especially when it comes to exploration. The company’s AI-powered algorithms deliver a personalized and interactive search experience, making discovery both easier and more delightful. This innovative approach has the potential to revolutionize the way users interact with digital content, making information more accessible to millions of people worldwide.

Stellarr

Stellarr is a talent acquisition platform that connects the right talent to the best opportunities in tech. Stellarr’s platform uses advanced AI-powered algorithms to identify the most suitable candidate for a particular job, reducing the time and costs associated with hiring. The platform also includes features like job matching and automated scheduling to make the hiring process more efficient for employers. 

Optimus

Optimus is a manufacturing technology company that is building the brains for advanced manufacturing. The company is starting with batteries and extending to other manufacturing verticals critical to the clean energy transition. Optimus’ technology is designed to make manufacturing more efficient and cost-effective, reducing the environmental impact of production while promoting renewable energy. 

We’ve long believed that, with the right tools and resources, student builders can build extraordinary companies. These four Pear Competition winners embody the spirit of student entrepreneurship and innovation, and we’re excited to see where they go!