
Life on Mars - A podcast from MarsBased
Corporate Innovation Summit: How AI is moving from obedience to autonomy
What happens when AI stops just following commands and starts making decisions on its own?
At the Corporate Innovation Summit, industry leaders dive into the world of agentic AI—intelligent systems capable of autonomous action. This first panel explores:
🔹 NVIDIA’s journey from gaming to AI dominance and how it became an infrastructure powerhouse.
🔹 AI in law firms—from transforming due diligence to redefining client interactions.
🔹 Finance’s approach to AI, balancing accessibility and security over chasing top engineers.
🔹 The real meaning of "open-source AI" and the challenges of transparency and control.
Join us for a deep dive into AI’s rapid evolution, how industries are adapting, and what’s next for businesses embracing autonomous intelligence.
🎬 You can watch the video of this episode on the Life on Mars podcast website: https://podcast.marsbased.com/
Hello everybody, I'm Alex, CEO and founder of MarsBased, and in this episode we bring you one of the conversations we had at the Corporate Innovation Summit. The Corporate Innovation Summit is an event organized by MarsBased in which we bring together in a room the founders of companies like startups and scale-ups and the decision-makers at big corporates in the field of innovation. We've been doing this since 2020 to bridge the gap between corporates and startups, and also to make more business connections happen.
Speaker 1:In this case, we bring you the first panel of the event, focused around agentic AI, and we'll have a very interesting conversation about the legal implications, how to make that happen, whether it's surviving the hype or not, and the technical viability to commit fraud and to protect ourselves from fraud as well. So, without further ado, let's jump right into this episode. Ladies and gentlemen, let me introduce the panelists of this first session of the day. So, people sitting in the front row: pretty much all of you are in this super crowded panel. It's going to be so interesting. So please give it up for Asier.
Speaker 1:Anthony, Caroline, Greg and Sarah, please join me on stage. Good, so everybody pick your seat. No, actually, we're missing one. There was no space for me there. I'm not important, come on. Well, welcome. We're missing one mic, I guess. Not everybody has one. Let's kick it off. So welcome to the show, Asier. I guess most people in the audience will be like: NVIDIA! Let's talk about stonks.
Speaker 3:Now it's on.
Speaker 1:We could be talking about the other hyperscalers. We could be talking about whether agents are important or not important, whether they're going to be solving what we're doing or giving us more trouble than they actually solve. But some people are like: are you going to ask them about valuation? And do you feel the pressure of getting constantly asked why NVIDIA single-handedly moves the needle in the stock market? And it's not like the big players anymore, it's just freaking NVIDIA.
Speaker 3:But yeah. So basically, when you say that every company at Mobile World Congress says that they are the best one, that they have the best engineers: well, we do. So I think we are the only one in that case and, of course, the market likes that. I cannot speak about what people are using their money for, but I think we are doing a very good job. We try to recruit the best talent here; we have some people applying to NVIDIA here in the audience. And basically we started having fun: we started with video games, and then we realized that graphics cards are very, very good for almost everything, even simulating quantum computers right now, even creating AI agents. So, yes, I think we are doing a very good job, and that's why the market, well, the people, like us. We also like people, and we also have a very good internal culture. We like to hire the best, we like to take care of everyone, and I'm really happy at NVIDIA.
Speaker 3:Before, I was at IBM on quantum, and I was happy: well, not really happy, but I was happy, it was good. The good thing is that I have seen both worlds: the quantum world, which I love (probably, if NVIDIA has a quantum computer in the future, I will be on quantum again, because I love quantum), but also AI, which is very exciting. We have news almost every week; everything is super exciting. We are seeing, I don't know, ten news items per day, and each week there is something really new, and we barely have the time to follow this pace.
Speaker 3:I don't know if you feel it, but too many things are happening right now. It's very exciting, because almost every idea you have now, you can prototype it in one day using agents, using ChatGPT or whatever service you want. So I think we are living in a very interesting time right now. For quantum it's almost the same; probably in a few years it will be much better. We are always waiting for quantum applicability. I'm not going to give a prediction on when quantum is going to be the thing, because Jensen did that and I'm not going to do it this time. So I hope we have quantum soon. I think AI is going to help us understand quantum computers much better, because right now quantum circuits are not super understandable, and there is something called quantum intuition, which is something you build by interacting with quantum computers. And I think AI can develop that quantum intuition to create something.
Speaker 1:Thank you. Certainly, I don't know if you've got the best developers, but at least you've got the best mics, so I'll grant you that. Now, there's another question for you, actually. Going back to what you mentioned, that we can build more stuff than ever, more rapidly than ever: that creates some sort of sensory overload, or the feeling of getting burned out by AI, right? The feeling that you can do almost anything. Twenty years ago I couldn't do graphic editing, video editing, writing a novel, creating images, and maybe even other crazy stuff, but right now I can. So I've got a ton of abandoned side projects. You're quite the multitasker and the fire starter: how are you dealing with this overload, this sensation of having too many projects that you cannot handle?
Speaker 3:Yes, we were speaking about that before. Yes, I understand you. I try to have a very good garbage collector and forget quickly about things, because if not, I cannot deal with everything. But it is true that before I had more time to explore projects, and now I don't have that time, because there are too many things coming: every time something new comes, there is a new idea that you want to explore. So I think we don't have a lot of space for creativity now. And also, I think it's not trendy now to get bored, so everyone is on the phone all the time. So we have too many engineers on the phone right now, or dealing with too many projects, and we don't take the time to stay on a project for a long time. I think that's a bit of a problem right now.
Speaker 1:Anthony, speaking about learning so many things: one of the sectors that has got to keep up with regulation, technology and what's going on is lawyers, because you're supposed to protect us from fucking up, right? So, more than ever, you have to catch up with reality. How do you do it internally to learn about the implications of, not even quantum, but agents and AI? And in the past it must have been blockchain, VR or whatnot. All of these new waves come with a shorter cadence now, more frequently than ever. How do you keep up with that internally?
Speaker 4:Yeah, we just make sure everyone's really scared of what could happen. I mean, we're in a strange place because, dealing with large language models: we sell language, right? That's the primary thing we sell. So in some senses, you might say we're screwed, right?
Speaker 4:So there's a real chance there, but it's also very exciting. We're definitely not bored, right? There's a lot for us to focus on about how we change our business and how we think about the model of the legal industry for the future. And it's kind of hackneyed to say it, but it's obvious that, whilst it's definitely a challenge to us that we suddenly have a tool that can do a lot of the grunt work, the basic legal work that many of our people have trained themselves on, it's also a fantastic way of delivering great work to our clients in a super efficient way. And if we can harness that, if we can use it in a way that is going to set us apart from the next lawyer, we're going to have a business for the future. But it's definitely something that occupies our minds on a daily basis. We spend lots and lots of time and energy worrying about it.
Speaker 1:So how do you use it internally? Have you got any specific examples, specifically about agents? How do your team, your departments use it? Hey, we built this tool, we built custom GPTs, we built some agents for you, you know?
Speaker 4:I mean, there's a lot of paranoia in law firms about using large language models, because a lot of what we do is obviously super sensitive. It's confidential. What we have to be really careful of is allowing client confidential information to be put into a model where it might ultimately be available to anyone and everyone, right? So we have to be really careful about how we're doing it. Now, there are a number of models that are being sold to law firms. One of the most prominent ones, which maybe no one in this room has heard of apart from lawyers, is one called Harvey.
Speaker 4:Harvey is a tool that is essentially based on ChatGPT, but it's kind of a walled-garden approach, so we can have confidence that information we put into it is going to remain confidential. It's trained on our precedents, it's using our voice, as it were, and we're starting to get people to learn how to use this as a tool for document review and document creation. There are some really obvious use cases. Due diligence is a very obvious one: it has a tool where you can throw in a thousand documents and say, tell me where the risks are in each of these documents, and it will produce a very simple report for you very quickly. Equally: hey, the lawyer on the other side has written back to us and he's marked up the document in this way, tell me all the key points, and it spits out a nice issues list straight away. So there are tools like that that are very helpful, and obviously there's a whole range of different tools that we're using.
Speaker 4:I think the document review stuff is kind of done, in a way: it's there, it's happened, it's changed our world and we've got to get used to using it. One of the interesting things for me about how we can use these tools to set ourselves apart, and people talk about this all the time, is: how, as individuals, do you teach lawyers to start thinking more laterally about these tools, and how can they use them to add more value to their clients and to be more valuable themselves?
Speaker 4:So, a really simple example I give to the lawyers in my team. When you're thinking about business development: I was thinking about an event the other day; I'm really interested in neuromorphic chips, I wanted to know about neuromorphic chips, how could we do an event around that? So I put it into one of the models and said: right, give me the top 200 companies in Europe who are involved in this. Who is the CEO? Who's the CFO? Who's the GC? Give me their LinkedIn profiles. Who's the most interesting talker in this space right now? What can they talk about in front of a group of 100 people that will fascinate them? Who should be on each panel? I had an amazing event inside half an hour that would have taken our business development team three months to pull together. So thinking more laterally about how we can use these tools to make ourselves just that little bit more superhuman is, I think, a really important skill that lots of our lawyers are going to have to learn.
Speaker 1:I should have done the same instead of putting in the three months of work to organize this event. So, for next year, there you go, but you'll be on the panel. No jokes about this. Moving on. Caroline, it's the second time Moody's speaks at the event.
Speaker 1:Two years ago, Sergio was here, but we nerded out more about the collaboration between AI startups and corporates. In this case, I'm much more interested in how you measure productivity. Is there any sort of specific KPI in the company that you use across departments? Because so much money has been deployed, as in: hey, everybody uses AI right now, everybody uses Copilot, everybody uses Raycast AI, everybody uses whatever. Then you find out that, no, not a lot of people use it, not a lot of people are actually making the best usage of it, and you're overpaying, right? Have you identified in which parts of the workflow, in which departments, it is best suited? I'm not sure if my mic is on.
Speaker 2:No, if not, you get mine. So it's a very good question, and I know we were told not to say we do the best things, but you can come on.
Speaker 2:But I think we were very early to the party on Gen AI, so early doors, 2023. Actually, you know, Sergio, whom you'll hear from later, and our team built Moody's co-pilot. We were pretty original with our naming. And initially our KPIs were actually around adoption, and all we wanted was just: please come and use our platform, this would be brilliant. We spoke a lot about having 14,000 innovators, and that was our entire workforce. We wanted them to actually use Gen AI in a safe way, we wanted them to be comfortable with it, and that was of utmost importance. But then you get into 2024 and you're like, okay, where's the tangibility here? And I think that's where a lot of organizations either maybe didn't get there in 2024, but they're definitely getting there now. And so we have a very active population, and the reason for that was there was endorsement from above, from the CEO of the corporation, who did role modeling, showing how he used it.
Speaker 2:We embedded it into tools people used, things like Teams and Slack, and had a web interface. We continued to evolve it as well, with lots of new skills, and we did a lot around education. But I think we then had to look at actual workflows. It wasn't so much individuals having to actively go to a destination to use Gen AI, but: how do you understand a workflow and have the subject matter experts working with our engineers and finding the best solutions? We built out specific tools, like I think a lot of organizations have. We built a customer service assistant, specifically looking at: what are our biggest job populations in our organization, where are the biggest opportunities that we can identify and match Gen AI capabilities with, and then basically building out the tools.
Speaker 2:But I think one of the really important things here, if you're at this stage in your organization, is: how do you actually benchmark? If you don't do a benchmark first, then you kind of don't know what the win was. And I think one of the other areas is accuracy. I think people have moved through this now, but early doors there were those expectations of one hundred percent accuracy, and we know humans like to think we're perfect, but we are fallible. So that was another area that we started to benchmark: what is human performance, and so what would be acceptable for Gen AI in that context? And it depends on what the task to be done is, of course.
Speaker 2:So I'd say those were the main areas. We see, of course, that our developers are able to get a lot of benefit out of GitHub Copilot, and that's been quite phenomenal as well. We're seeing it with sales teams, we're seeing it with legal teams, and I think, as you start to see some of the skills improve, particularly around data, the ability to compare documents rather than just have chunks of documents, some of these things actually open up a world of opportunity for internal efficiency. So, pretty broad. But I think the tangibility is the bit that we're really doubling down on now, trying to say: did it bring value? Because I think the market itself is questioning that quite a lot.
Speaker 1:And actually you went over it very rapidly, but you built your own co-pilot, right? We did. That's something that you said like, oh, we built a co-pilot.
Speaker 2:I didn't personally, but we did.
Speaker 1:No, I mean, the naming's great. I mean, Sergio has got worse ideas than this naming, so no worries about that. But what I wanted to stop at here is: a lot of people have always had this kind of information available, right? You've got your CRM, you've got your ERP, you've got your data room, whatever. Truth is, you never used it because it was scarcely available. It was costly: maybe you needed an engineer, you needed to build an API, you needed to do something. It was expensive. Now that it's there, a lot of people are not using it, because it's so easy to access that it overloads you with information. Are you encountering this problem, that the excess of transparency is also causing problems you didn't know you would have?
Speaker 2:I mean, maybe I'm not answering exactly your question, but I personally feel that the future of Gen AI shouldn't be about adoption, and it shouldn't be people actively having to use Gen AI. I'd like it to be passive. I'd like us to predominantly be embedding it into workflows. I would like your co-workers to essentially be agents. I'd like you to have human co-workers as well, but I think that as we move into agentic workflows, you won't have to actively go and write a prompt; we can actually work out how to solve specific workflows, but also unique workflows and personalized workflows, using agents. So I think, longer term, the need for the majority of individuals to proactively go and write a prompt and do lots of steps goes away. That's the whole evolution of Gen AI that we're seeing at the moment: moving beyond that kind of assistant into autonomous workflows, autonomous assistants.
Speaker 1:Great, thank you. Greg, let's move over to you, also in the financial sector, right? The financial sector has been an early adopter of AI and quantum, probably because the kinds of problems that you're trying to tackle are very, very complex. They require large sums of investment, big teams. Can you talk about the processes that you're personally involved in, and how this kind of investment in bleeding-edge technologies affects your hiring strategies? Are you able to attract more talent, are you able to compete for NVIDIA's best engineers in the world? Oh God, I'm not going to answer that at all.
Speaker 6:So I have no idea. But what I'd like to double down a little bit on is that we also created ours. Initially, because I was in the automotive area of S&P, we called it Autopilot, so a little bit more original. And then we vibed off of KITT, the car of David Hasselhoff fame, and we even had it so that it did the woo-woo thing. It was really cool.
Speaker 6:But what we decided very early on is that no one knows what this technology means. So, even though we're all talking here, we're all here because of, for the most part, generative AI (and lately quantum, I know), and trying to figure out: what does it mean in terms of these legacy stacks that we have? So we very early on decided: okay, there is no clear-cut way that we're going to deploy this bleeding-edge technology. We need to get it into the hands of the people who have the problems. And so we don't think of it so much as having the best engineers, but instead: how do we become the people who are delivering the water supply to a garden? Once you figure out how to irrigate generative AI to everybody, make it free, make it really accessible, make it safe so that they don't have to worry about leaking tokens out onto the internet, then people begin to build. And so that is more the strategy. Instead of trying to figure out how we get super good engineers, it turns out the problems are more about information security, user experience, UI. So it's not so much that we're trying to find people who are experts in large language models.
Speaker 6:Our first problem was just trying to find React developers. So that's a completely different way of thinking about it. I let the folks who are using NVIDIA chips make the LLMs better every other week, and instead we concentrate on how do we get this technology into the hands of the people who have to figure out how to use it. And then one other crazy thought If we were to shut down all large language models tomorrow, I think for the most part all of our companies would be fine at this point, but if we were to take away Excel from all of us, we would collapse. So we are on this long road to trying to figure out how do we use technology, put it into these old, old workflows and how do those transform over time. So I feel I'm more like someone who's just supplying water to a garden, rather than something more sophisticated than that.
Speaker 1:It's funny that you mentioned the aspect of cybersecurity, of security: data compliance, data leaks and whatnot. That's part of the conversation when we're talking about AI. We see these major scandals, big companies leaking data: oh, this was used here, or we found out that this company has been illegally using scraped data, and whatnot. What kind of strategies do you have to produce your guardrails? How do you evolve them? How do you make sure they keep up with the pace of the market?
Speaker 6:Oh, I mean, we started with that, so the first thing we had to do is make sure that it complied with everything we have in terms of our regulation compliance and information security, but that's something that we're already very good at, so we already have existing structures, we already have existing lawyers. We have everything we need to make sure that we're compliant and that we're secure. The other thing is like we take everything onto our own infrastructure. Once you have that, then the rest of it is these other things that are new, and so the new stuff. There isn't a playbook to go on, so that's why you really have to get it out into the hands of that segment of your population that is eager and innovative. So how do you enable the innovators within a large organization?
Speaker 1:Sarah, let's move to you. After your experience of having worked at some of the hyperscalers, you have now created a venture studio. And finally, one of the things I like most, one of the things you're passionate about, is the open-source principles of governance and how they apply to AI. Because I didn't see this coming, but a lot of corporates are actually giving to open source like they never have before, right? So what has been your role up until now, and how do you see this going?
Speaker 5:In the land of open source plus AI, there's been some really interesting work over the last 18 months about what it means to be open source in AI, because Meta came out first and said: we are open-source AI. But that was before it was really reasoned over in any sort of way. Because when you look at a model now, it's more akin to a binary output from a compiled software project. So is just licensing the binary, or in this case the model, as open source enough? Because the principles and the freedoms behind open source are that it's reproducible, you can derive new works from it, you can inspect the code, and you can work on it in conjunction with the other authors, and I don't know of very many open-source models that meet those requirements at a very high level. Now, the Open Source Initiative has spent the last year and a half working on, and released in October, the first version of the definition of open-source AI, and it speaks to it in terms of a spectrum.
Speaker 5:To be perfectly open, you would also have to have your data licensed as open, and you'd have to have the infrastructure that builds the model licensed as open and available publicly, both of those, so that someone could reproduce what was built and could reason over it, could amend it, could change it. So there's been a lot of discussion about that and the spectrum that the OSI has brought forward. There are some, I'll call them dogmatists, about open source who are not keen on the fact that data still has to be obfuscated in a lot of cases. But it's an ongoing process, and this is just the first version.
Speaker 1:And the other thing I wanted to discuss with you: the talk out there is usually about how you can use agents for, oh, sales goals, or for processes that are too complex, to make more efficient workflows. Okay, but one of the underlying truths of agentic AI is that we have to protect ourselves from these agents, because they're being used for scams and fraud. Every day I receive a freaking SMS from Coinbase: your account was accessed from this random country, call us immediately and give us your credentials, right? Passwords, please. And if they record you, they can actually make a call with your voice, because they've been training a model of your voice with the podcasts that you have spoken in, and stuff like that. Can we stop a minute or two here to talk about this? Because you look very passionate about this.
Speaker 5:Yeah, securing agentic AI is a really big open question right now. There are so many different directions to go with this. One of the projects that I work on now is called the Coalition for Secure AI, and, while it is not exclusively focused on LLMs, it is actually hyper-focused on: how do we make sure that we take existing controls and extrapolate them to correct usage in LLMs and other types of AI, and what are the new controls that we need to develop for things like agentic AI? Because you didn't used to have to have your piece of software be able to call a lawyer and say, am I allowed to do this? But your agent might need to. So what are those controls that need to be brought into AI in a meaningful way in order to make it secure?
Speaker 5:We have four different work streams right now. One of them is looking at software supply chain security, which is actually also data supply chain security for these models. One of them is looking at preparing our defenders for the new world that has AI in it. I've heard some defenders recently say: I have to treat any AI model as a hostile entity in my network, because I don't know what it's going to do; I can probabilistically say it might do this, but I still need to mitigate. We have one work stream that is looking at the new and varied security risks of these models. That sounds terrible, but it's all stuff that needs to happen with every new piece of technology, by the way. And the last work stream is looking at: what does it really mean to be agentic, and how does that meaningfully work within our structures and corporations today?
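The control Sarah alludes to, an agent having to ask whether it is allowed to act before acting, can be prototyped as a default-deny permission gate that the agent runtime consults before each tool call. A minimal hypothetical sketch (the tool names and policy below are invented for illustration, not from any panelist's product):

```python
# Hypothetical policy: which tools an agent may call, and which need a human.
ALLOWED_TOOLS = {"search_docs", "summarize_contract"}  # safe, read-only tools
NEEDS_APPROVAL = {"send_email", "transfer_funds"}      # high-impact tools

def authorize(tool: str, approved_by_human: bool = False) -> bool:
    """Return True only if the agent may run `tool` right now.

    Unknown tools are denied by default, mirroring the "treat the model as
    a hostile entity" stance: anything not explicitly allowed is blocked.
    """
    if tool in ALLOWED_TOOLS:
        return True
    if tool in NEEDS_APPROVAL:
        return approved_by_human
    return False
```

A real deployment would log every decision and route the approval step to a human reviewer rather than a boolean flag, but the default-deny shape is the core of the control.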
Speaker 1:And, coming from open source: I don't know about you, but I would feel way more secure if we didn't allow big tech to dictate the principles of where AI is going, right? They can provide the technology, they can make the progress happen, but it should be more democratic. I don't know if you have any thoughts on that.
Speaker 5:This is one of the reasons I am very active in the Coalition for Secure AI. Because, yes, it has a lot of big tech companies helping fund it and participating, but it's doing all of this work in public. So if you want to come participate and say, wow, that work stream paper that you wrote completely misses this chunk of my worry: please come participate, because we need more people from the outside helping us reason over this and decide what the new frameworks are for how we address these concerns, and how we move forward safely with our technology.
Speaker 1:How do you keep up? Basically, last week there was a whirlwind of big news dropped by Microsoft, Amazon, NVIDIA. Every big player dropped something: you know, the new quantum advancements by Microsoft, the new models by Amazon. NVIDIA announced the NIMs, if I'm not mistaken. The NIMs, yes, the NIMs, right. So can you talk about that, and how to keep up with it? How do you make it so that we can keep up with all of this, knowing that it will be obsolete in a week?
Speaker 3:Yes, so basically you only need to remember two names: NeMo and NIMs. NeMo is where you have some data and you want to train a model with that data. Let's imagine you want to train a model for finance: you have all the finance data, you do the fine-tuning, and then from NeMo you can create a very small package called a NIM, which you can then use as a piece of an agent, so that an agent can call that NIM and use it. So you can use the NIMs that we already have, and we have plenty of NIMs for different things: for text, for voice, for video. You can also create your own NIM based on your data. Basically, that's the way we are dealing with agents, or agentic workflows: we have these so-called NIMs, and they're quite easy to deploy, because we have everything to deploy them almost in one click. You need to read a bit through the documentation, and almost everything is kind of free.
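As a rough sketch of the workflow Asier describes (fine-tune with NeMo, package as a NIM, let an agent call it): deployed NIMs expose an OpenAI-compatible chat API over HTTP, so an agent step can talk to one like any chat endpoint. The URL, port, and model name below are illustrative assumptions, not taken from the talk:

```python
import json
import urllib.request

# Assumption: a NIM deployed locally, serving the OpenAI-compatible
# /v1/chat/completions route. URL and model name are illustrative.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_payload(model: str, question: str) -> dict:
    """Build an OpenAI-style chat request body for the NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.2,
    }

def ask_nim(model: str, question: str) -> str:
    """POST the payload to the NIM and return the first reply's text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_payload(model, question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In an agentic workflow, `ask_nim` would be one tool among several that a planner invokes; nothing here is called at import time, so the sketch stays inert until wired into a runtime.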
Speaker 3:And also, when you were speaking about open source: a lot of them are based on open-source models. For example, we have Nemotron, which is a fine-tuned version of Llama, which is an open-source model, and we are also very supportive of the open-source community. Before, I was working in the NVIDIA robotics team in EMEA, and we have the Jetson devices, which are small embedded devices for robotics, for robots, and almost all of them are based on open-source software, and there is a big community around open source, and we love that community. Of course, not everything is open source at NVIDIA, but we are trying to move little by little closer to the open-source community, because we know that the AI revolution has come from open source: papers, researchers, universities. So we are trying to give back to that community.
Speaker 1:If anybody on the panel wants to, think about one big fuck-up they've done with agentic AI. While we do a round, one of the things I'm most interested in, and a lot of people in the audience are like, yeah, that's fine, but how do I work with these people? How do I work with S&P, how do I work with NVIDIA and all that? So if you can say really quickly how to get in contact with you, then we move to each one of us.
Speaker 3:I'm pretty public, so it's very easy to find me. Basically, if you have any really good idea or workflow that you want to test, let me know and I can put you in contact with anyone in NVIDIA for that. And there are also some people here from NVIDIA if you want to speak later; the EMEA robotics lead is sitting right there, so if you have any project related to robotics, there it is. If you have any project related to AI agents or anything else, let me know and I can move the question to almost anyone.
Speaker 3:So take advantage that you are a small group, so we can talk directly here.
Speaker 1:Tony.
Speaker 3:Thank you.
Speaker 4:The question is how to get in touch with me. Yeah, exactly, and why would they? Why on earth would you call me?
Speaker 1:When you fuck up. You will really need that contact.
Speaker 4:Yeah, I'm the guy in the firm who's helping founders grow and scale businesses through venture finance, and hopefully ultimately helping them secure an exit. You can find me on the website: Anthony Waller at CMS. Tony Waller at CMS. Easy. We're open, very happy for you to get in contact.
Speaker 2:Helen. You can find me on LinkedIn as well, although there is a very successful TEDx speaker with the same name, and I'm not her. She looks a bit similar to me as well, so it's a bit of a problem. Other than that, we have recently joined FINOS with the Moody's co-pilot that we talked about earlier; we're working with FINOS to make that code available for organizations who literally kind of want to leapfrog in their Gen AI journey. So have a chat with us afterwards, but find me on LinkedIn, and Sergio and Robbie are here from Moody's as well.
Speaker 1:Thank you.
Speaker 6:Right, Greg Mount, also on LinkedIn. There's another Greg Mount; he runs a hotel chain. That's not me, I can't get you a hotel room in Prague. I'm no longer at S&P, I'm now at Kensho. Kensho is owned by S&P, so it's an AI area within S&P. I'm very interested in the fact that there don't seem to be that many people talking about applied AI at businesses. I hear a lot about the tech, I hear a lot about the chips, I hear a lot about different approaches, but I haven't heard as many stories about how we are applying this at businesses, or how we are looking to help businesses apply this technology. I think the change management that's going to happen at organizations is going to be immense, I think the way that businesses structure themselves is going to change dramatically, and we're just beginning to figure those topics out.
Speaker 1:Speaking about competition on names: my surname is Rodriguez, so, Alex Rodriguez, the most famous baseball player of all time, according to somebody who doesn't like baseball, because outside of the US, who does? South Korea?
Speaker 6:What's that? South Korea? Oh yeah, and Japan. We have Japan here. Japan baseball, Tokyo Dome.
Speaker 1:Amazing. Someday I will beat him on SEO, but it takes a while, right?
Speaker 5:Sorry, so I'll point people to LinkedIn as well. That's super easy, and I usually come up in people's searches, but you will need my email address: GenLab is a small enough venture studio that it's just my first name at genlabstudio, so I can be found there. I can also be found, which nobody else mentioned, on Mastodon, at mastodon.social. And then, was there another? Oh, how or why might you contact me? That would be the why: as a venture studio, what we actually look to do is make very small investments in existing companies that can become building blocks for real problems. So we should talk, as should the strategic investors in our fund.
Speaker 5:So generally, as a venture studio, we start with a problem and then we work to solve it, and we spin out companies a little bit later; we attach founders a little bit later than usual. So we're not usually funding a hoodie and a founder, or a hoodie and an idea; instead we're building minimum viable products and then saying, oh, Alex, you've been working with me on this and you've been a great advisor, do you have an interest in maybe being CTO over there? So we bring people in along the path and then put founders on later. So if people are interested in talking to me about real-world use cases, or have really interesting building blocks that might go into solutions focused on critical infrastructure and the intersection of AI, distributed autonomous systems and cybersecurity, come talk to me.
Speaker 1:Awesome, thank you very much. Well, Sergio is going to come up here to prepare the demo; it might take a couple of minutes. So, if somebody wants to share a fuck-up, has somebody thought of one? Anthony, last year you shared one. You're not forced to do it again this year, but does somebody want to share a fuck-up or a funny story with?
Speaker 5:Gen AI? I was going to say, I have lots that aren't AI yet.
Speaker 1:Okay, it might be something else, but it has to be yours. It's a long career.
Speaker 4:Yeah, exactly. While he prepares for a couple of minutes, if nobody else wants to go for it, I can tell you about one of the worst days at work, which wasn't really my fuck-up; it was kind of my team's.
Speaker 4:This was really early on, when I first joined the firm. It was dot-com, right? Who remembers that? That was a long time ago. And I was acting for one of the very early travel businesses, which was selling to another travel business to make a travel business that you will know today. And some of the shares that the business I was working for was selling were, bizarrely, represented by bearer bonds. People remember those? So they were bearer shares: the shares didn't exist other than as a piece of paper. If you didn't have the piece of paper, you didn't have the shares. So they lived in a bank, in a safe. Very secure, no problem.
Speaker 4:And we were completing on a Sunday, because, you know, why not, that's what founders like to do. And I had to arrange for a secure van to bring the share certificates to the meeting. That was my one job; I was very junior, all right. So I arranged it, got a secure van to turn up to the meeting. Everyone was supposed to be there for 11 o'clock, a nice civilized completion on a Sunday. I got the croissants and the coffee out and everyone was very happy. After about an hour, everyone's sort of looking at each other saying, well, where are they? Where are the certificates? The croissants were stale by that point, but they were there. Two hours, three hours, no share certificates. So I started slightly frantically calling the security team, and they didn't know where this guy had gone in the van, and eventually I got a very sheepish call back to say he won't be turning up today with the share certificates. I'm like, well, why is that? Well, he was held up and robbed at gunpoint.
Speaker 1:I thought you had forgotten to ask.
Speaker 4:And they were gone. They were never found again. It was an awkward conversation.
Speaker 1:Quite the.
Speaker 4:Sunday. Honestly, someone must have known it was happening; that's the other weird thing. What was weird is they never turned up with them. You'd have thought they might have turned up and sort of taunted us at the window, but no, they just walked off. They must have thought there was something else in the van, I don't know. But anyway, there we are. Someone else's fuck-up, but it made me feel pretty bad.
Speaker 1:Please give it up for the amazing panelists and for everything they've shared. Thank you very much.