Introducing Artificial Investment: A Substack on the Future of Investing
Introducing a new Substack about advanced analytics, GenAI, and investment decisions, but mostly some hot takes on the Apple announcement
Welcome to Artificial Investment
Hello and welcome to Artificial Investment, a Substack where I will share my insights and perspectives on how Generative Artificial Intelligence (GenAI), alternative data, and advanced analytics (AA) are transforming the world of investing.
Some of GenAI is hype, but I’ve seen plenty of POCs and MVPs at companies that do genuinely useful things, saving real money or delighting customers with new features. GenAI is constantly improving, but even if it stopped improving tomorrow, a lot would still change as the current technology flows through the business world.
The point of this Substack is to share my latest thinking on GenAI and AA for investors. What new use cases am I seeing? What new analyses are we doing with alternative data? What are the latest GenAI trends? I often have early thoughts that aren’t well-formed enough to become a white paper but might still be interesting to investors who like to think about this topic. I’ll use this forum to put those ideas out. Hopefully, some of you will provide feedback in the comments.
Given the big announcements Monday from Apple, I wanted to share a few thoughts about some of the big ideas on display and think about the implications for PE investors and their portfolios.
(As an aside, how are we supposed to abbreviate Apple Intelligence? I think AI is already taken. Should we say AppleI?)
1. Creating new ways to use GenAI without prompting, using drop-downs and icons
Several times in the presentation, Apple made negative comments about how hard prompting can be. They are trying to build a system where people can use GenAI without writing a prompt at all. For example, you will right-click on a draft email and then choose “professional,” “concise,” or “summary,” and it will just work (hopefully).
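To make this concrete, here is a minimal sketch of how a no-prompt menu like that could be wired up behind the scenes: each menu choice maps to a pre-written prompt template, so the user never types a prompt. This is my assumption about the general pattern, not Apple’s actual implementation, and the `llm_complete` helper is a placeholder.

```python
# Hypothetical sketch: each menu choice maps to a canned prompt template,
# so the user never writes a prompt. llm_complete() is a placeholder for
# whatever model call (on-device or API) actually runs.

REWRITE_TEMPLATES = {
    "professional": "Rewrite the following email in a professional tone:\n\n{text}",
    "concise": "Rewrite the following email to be as concise as possible:\n\n{text}",
    "summary": "Summarize the following email in two sentences:\n\n{text}",
}

def llm_complete(prompt: str) -> str:
    # Placeholder: wire this to your LLM of choice (local model or API).
    raise NotImplementedError

def rewrite_email(draft: str, choice: str) -> str:
    """Apply the menu option the user clicked ('professional', 'concise', 'summary')."""
    template = REWRITE_TEMPLATES[choice]   # the drop-down selection
    return llm_complete(template.format(text=draft))
```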
Creating images and emojis seems to involve a system where you drag concepts around a circle and only add a prompt if you want to. It looks a bit like throwing ingredients into a witch’s cauldron and seeing what comes out. It will be fascinating to see whether this is intuitive for users, but I wouldn’t bet against Apple on UI design.
Another cool feature is that email previews will automatically show a summary instead of the first few words of the email.
For 18 months, I’ve been predicting that the future of user experience is ChatUX. I’ve said that every piece of software or website will have a box on the page that you can type into to tell it what you want. Then, the tool will just magically do it. I still believe this will have value in a lot of cases. But now I wonder if something like Apple’s hybrid, more graphically oriented model will end up being the winner, at least for B2C applications. A recent poll said that 47% of people had never heard of ChatGPT. It may be unrealistic to think that people will ever learn how to interact with a bot through text. I’m sure many of those 47% have iPhones and will benefit from these features without knowing how they work.
2. Bringing personal context into the responses
Another exciting innovation of Apple Intelligence is that it can bring personal context into its responses. Your phone will read every text, email, note, etc. to better understand what you want. An example they gave in the presentation was asking, “What was that book Alice recommended last month?” and getting the right answer.
This is a very important point for corporate use cases. The more you know about a customer, the more useful the models will be. For example, one of my favorite use cases is a bot that prompts salespeople every Monday to contact 5 people they haven’t talked to in a while – leads or customers – and drafts the emails for them. If you don’t have much customer information in the CRM, these emails will be very generic, highlighting a new whitepaper or webinar. If you have a lot of information, you might be able to say something like, “Last time we talked, you said that it was crucial that we add feature X. Good news: we just put it in! Let’s get time to walk through it.” That’s a much better reach-out, but it requires detailed customer knowledge in the system.
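As a toy illustration of how much the CRM context changes the draft, here is a sketch that builds the Monday outreach prompt from whatever the CRM knows about the contact. The field names are made up; the point is only that a thin record produces the generic email and a rich record produces the specific one.

```python
# Hypothetical sketch: the richer the CRM record, the more specific the
# outreach prompt (and therefore the draft) can be. Field names are made up.

def build_outreach_prompt(contact: dict) -> str:
    prompt = (
        f"Draft a short, friendly check-in email to {contact['name']} "
        f"at {contact['company']}."
    )
    if contact.get("last_feature_request"):
        # Rich CRM data: reference the thing this contact actually asked for.
        prompt += (
            f" Mention that the feature they requested "
            f"({contact['last_feature_request']}) just shipped and suggest "
            f"a time to walk through it."
        )
    else:
        # Thin CRM data: fall back to a generic touchpoint.
        prompt += " Mention our latest whitepaper and upcoming webinar."
    return prompt
```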
3. On-device processing to reduce latency
One of the most impressive features of Apple's GenAI is that it can run on the device without sending any data to the cloud (most of the time – more on that below). This should mean that responses are faster and more secure. Apple can do this because they make their own chips, which are optimized for AI. On-device processing avoids network connectivity issues, privacy breaches, and server outages, and users can get instant answers to their questions, even offline.
The corporate applications of this approach may be exciting. For example, give each repair tech an app on their phone that helps them troubleshoot, is fast, and doesn’t require an internet connection, which may not be available in out-of-the-way areas.
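As a rough illustration of what that could look like, here is a sketch of an offline troubleshooting assistant built on a small local model. The stack (llama-cpp-python with a quantized model file bundled into the app) and the model file name are my assumptions, not anything Apple announced.

```python
# Hypothetical sketch of an offline troubleshooting assistant. It assumes a
# small quantized model file (.gguf) is shipped with the app and that the
# llama-cpp-python package is installed; no network connection is needed.
from llama_cpp import Llama

llm = Llama(model_path="repair-assistant.gguf", n_ctx=2048)

def troubleshoot(symptom: str) -> str:
    prompt = (
        "You are an assistant for field repair technicians.\n"
        f"Symptom reported: {symptom}\n"
        "List the three most likely causes and how to check each one:"
    )
    result = llm(prompt, max_tokens=256)
    return result["choices"][0]["text"]
```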
I’ll have another post on latency in the near future, but my quick thought is that people are remarkably impatient when chatting with bots and removing or reducing this friction will matter. It will be interesting to see how good Apple’s models are in terms of latency and output quality.
4. Choosing the right LLM for the question asked
Another innovation of Apple's GenAI is that it can choose the best language model for the question asked. If it’s an easy question, it handles it on device, as discussed above. If it’s a harder question, it spools up Apple’s private cloud resources. And if it’s a real stumper, it sends the question to ChatGPT (either linked to your account or anonymously). Based on an announcement today, it sounds like it might also ping Google Gemini models in some situations. I will be curious to see how effective it is at triaging questions. Obviously, it will not be a good user experience if it tries these sequentially. It will be very annoying to get a message after a minute that says, “Sorry, this was too hard for me. I’m calling ChatGPT.” Can the bot predict in advance which model it should use without adding latency?
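To make the “predict in advance” idea concrete, here is a toy sketch of one-shot routing: a cheap heuristic (in practice, probably a small classifier) picks the tier before anything is sent, so the user never sees the escalation. The tier names and the word-count heuristic are my illustration, not Apple’s design.

```python
# Hypothetical sketch: classify the request once, cheaply, and route it to a
# single tier up front instead of trying the small model and escalating after
# it fails. The heuristic is deliberately crude; a real system might use a
# small classifier trained on past requests.

ROUTES = {
    "easy": "on-device model",
    "medium": "private cloud model",
    "hard": "external model (e.g., ChatGPT), ideally with user consent",
}

def estimate_difficulty(question: str) -> str:
    words = len(question.split())
    if words < 20:
        return "easy"        # short, likely factual
    if words < 200:
        return "medium"
    return "hard"            # long, open-ended, or multi-step

def route(question: str) -> str:
    return ROUTES[estimate_difficulty(question)]
```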
If this works, I think it is an interesting technique for companies to consider as they build GenAI applications and features. For example, a model could estimate the size of the context window required to solve a problem (i.e., do I need a few documents, many documents, or an entire corpus to answer the question?). If it decides it needs 1-2 documents, it uses a model that’s optimized for smaller context windows. If it needs many documents, it uses a model with a larger context window. And if it needs to bring in a huge corpus of information, it uses RAG (Retrieval-Augmented Generation) to store the documents in a vector database and pull them into the context window as needed.
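Sketching that decision logic for a corporate setting, with thresholds and model labels that are purely placeholders:

```python
# Hypothetical sketch: pick the model (or fall back to RAG) based on a rough
# estimate of how many documents the question needs in its context window.
# The thresholds and model labels are placeholders, not recommendations.

def choose_strategy(estimated_docs: int) -> str:
    if estimated_docs <= 2:
        return "small-context model"   # cheapest and fastest option
    if estimated_docs <= 20:
        return "long-context model"    # everything still fits in the window
    # Too much to stuff into any window: store the corpus in a vector
    # database and retrieve only the relevant chunks at query time (RAG).
    return "RAG pipeline"
```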
I will do a post on multi-agent approaches and what they can do in the future. For now, I’ll just say that if you want to solve a problem by spooling up multiple agents, each with a specific role to play, it could make sense for some of them to use different models depending on the tasks.
5. Marketing privacy is important
If you watched the announcement live, you saw privacy emphasized over and over. Apple made it clear that none of your data would be available to anyone else. This is clearly a core part of their strategy. The idea of a robot reading all my emails feels a lot more comfortable when I know that robot lives on my phone. This approach may help users get comfortable with features that would otherwise feel intrusive or creepy, given the personal context the bot will have.
On the other hand, I don’t know if people really care about privacy. People routinely use email services that read their emails in order to show them ads. Additionally, I buy a lot of alternative data for Bain, and many of those datasets include people who have agreed to share their full, anonymized purchase history in exchange for a very small amount of compensation. I tend to think most people only care about privacy when there is a breach, but Apple’s techniques make that risk extremely small, which is a big plus.
For companies, the privacy concern is much higher, as they have lots of proprietary information that they don’t want to leak into models. In my first six months talking to PE funds about GenAI, the first question was always, “How do I make sure my data doesn’t leak into the model?” I don’t tend to get that question anymore now that we have many secure options for accessing LLMs.
I do wonder if one benefit of Apple’s approach is that companies will be comfortable turning these features on for employees. Many firms keep confidential data in a secure container on the phone. It would be great to be able to use Apple’s text editing and summarization features on sensitive documents in the container, but I don’t know when they will allow that.
Conclusion
I’ll be excited to see if the new Apple system lives up to its promise when it is released in the fall (although rumors are that a beta release of some features is coming in July).
I’ll be trying to post something once a week or so going forward. Please subscribe if you want to get these delivered to your inbox, and feel free to leave a comment if you have any questions or something to add.
I'm wondering if we will also see an enterprise use case that automatically scrubs and converts emails and IMs to match a preferred tone. Externally, there is value in consistent communication; internally, there is a potential HR and eDiscovery risk-mitigation value proposition.
On the privacy front, I'm wondering if there are lessons to be taken from Google's and Amazon's forays into smart home devices. Neither company seems to have realized its goals there, whether those were revenue or data collection, and both have seen widely publicized layoffs on those teams. I've personally already made the general trade-off of convenience in exchange for my sense of privacy, but the camera on my Alexa device remains covered and on restrictive settings – so perhaps a different standard applies for devices in general.