Terminal Tech Talks – The Software Behind Autonomous Vehicles

Hello everyone, good afternoon

I am very happy to see you all here,

even if it is through a computer; the most important thing is that we can do it from the safety and comfort of our homes. At Terminal, we believe in a world without geographical barriers. We work every day to connect talent with the most innovative opportunities, no matter where you are in the world, and above all, without having to move away from the place and community where you live, where you connect every day with your family, your friends, and your passions. That is why we do these Tech Talks, now in Spanish, to find the talent that is doing incredible things and, in that way, to inspire us all to accomplish our goals. If you don't know Terminal, the opportunities we have for you, and the benefits of being part of this community, you have two options. Number 1: go to our website, Terminal.io/Openings. Number 2: at the end of this very entertaining talk, we are going to publish a link in the different chats we have, on YouTube and on Facebook, so that you can connect for a question-and-answer session

with our Recruiting team. You don't have to be looking for a job to access it; it is a time for all of us to have a great time and talk to each other. I do not want to go any further without thanking the entire Terminal team in Canada, Mexico, and the United States, who made it possible for this transmission to look and feel as professional and pleasant as possible, and also all the communities, universities, and people who supported us throughout the American continent. Please use the chat tool. Make your comments and leave us your questions. Talk to us; no one is going to silence you. There will be opportunities to ask some of those questions to Andres and Daniel, but now, let's get down to business with Andres and Daniel. Daniel, a "paisa" (from Medellin) and a very good friend, is Terminal's Expansion Manager; he is helping us fulfill our mission of unlocking more and more places throughout the American continent in order to reach more communities. And before, he had the opportunity... Sorry. He had the opportunity... (intermittent signal) ...the word is yours to introduce our great guest, Andres Morales. Thank you. Thank you very much, Oscar. My name is Daniel Vargas, and I am from the Terminal Expansion Team. Our team is in charge of the strategy and execution of opening new markets to offer these job opportunities to local talent. Today I am very happy to be here in this first Tech Talk in Spanish, having conversations with Latin American talent working in tech, mainly software engineers who have been making a difference in their respective fields. Our guest today is my former colleague at Zoox, someone with whom I share the same nationality. His name is Andres Morales; today we will learn a little about his education, his work experience at Google, and how he got to Zoox. I feel a lot of admiration for Andres, because through artificial intelligence he and his team work on the difficult task of literally predicting the future, and how the robot
must navigate safely through public streets. I want to thank today's audience for connecting; we hope you learn something new and are inspired by Andres's story. Andres, without further ado, welcome, it's a pleasure to be here with you. How are you? Hello Daniel, thank you very much for having me; I am very happy to be here. Great, brother. Well, I have some questions to start with. In this first part we want to understand a little more about your background and your history, so the first question with which I want to start this talk is: when did you decide you wanted to study computer science? Hello Daniel. Well, computer science has always been fascinating to me since I was young. I always loved video games; I loved the feeling of being able to be anywhere in the world with my computer, having the power to do computing, and to navigate and visit different places, all from the comfort of my home. Today that is an even greater motivation in the situation we are in, so yes, computer science has always been a great passion for me. Great. Your story seems quite interesting to me because you went straight from high school to Stanford. Can you tell us a little about that transition and how you managed to do that? Of course. I always tried to be very studious at school, studying hard and taking on extracurricular activities, and when I got here the transition was very difficult: obviously a lot of work, a lot of learning, a lot of knowledge, meeting different people, understanding a different culture, but always knowing who I am, with the goal of being an engineer and being successful in my career. Great. Your first work experiences were at Stanford, in the artificial intelligence lab, and also with the Computational Research Center of the Army,

that is, the United States Army. How did you get these opportunities? Well, Daniel, as a student I always wanted to do some kind of research in the summer, and the way I found to do it was by always being curious: looking at the professors I liked, looking at the research they did, and through them starting to talk to people and make connections. So, just after a class, I approached one of my professors and said: hello, I find everything you are doing very interesting, your research; I would like to be part of it. In general, they were very kind and gave me the opportunity; obviously, they interviewed me, and in the summer I had the chance to carry out some small projects that impacted my education in a very significant way. It is very important to do internships, to be in a situation that is not entirely academic, in order to learn a little what it is like to have an open-ended project in which you cannot always have a correct answer. Can you tell us a little about one of these professors, the renowned Andrew Ng? Can you tell us a little about his history and what he has been doing in the field of artificial intelligence?
Of course. Andrew is one of the biggest names in the world of artificial intelligence. I had the honor of working with him and his graduate students, and yes, he has been making a difference in artificial intelligence for over 20 years. He has also opened a place to do research in Medellin; he is a person who is thinking about the future, thinking about how to open artificial intelligence to other markets, to other people. It was a great experience with him. I worked with one of his students who was trying to translate an algorithm that was used for images into an audio format, to be able to know what is happening in the audio in the same way that we know what is happening in photos. Can you give us an example of how that technology, those projects you were working on, is being used today? Of course. Obviously, the project that I worked on did not end up being what is used today; today there are algorithms that are a little more advanced. But in general, it was about detecting the human voice in an audio file, and somehow getting to understand whether someone is saying words, beginning to understand them and translate them so that the computer can understand them. So nowadays products like Google's speech-to-text, or the Dragon NaturallySpeaking program, for example, are all natural language processing applications, and it is a very fascinating area within the field of artificial intelligence. Andres, how did these work experiences influence your professional career?
Like I said, Daniel, they were very, very important. I would not be the same person without having had these experiences. It is very different to have a class in which you know what the answer is, you go from A to B, you know where you have to go, compared to a situation in which you have a much more unstructured problem, in which you have to use your head to figure out how to unlock it, how to try different things, to achieve something that may not be defined at the start and is defined as you work on it. That kind of work is much more like what you would have in the professional field, and it gives you an experience that cannot really be obtained in an academic setting. Great.

When you finished university you went to work at Google, and you were there for almost 4 years. What led you to this company? What attracted you to it? I had always wanted to be an engineer, and at that time Google was the pinnacle of engineering: what they were doing in internet search, email, voice, YouTube, all those big-data things. It was really the place to be and to learn from the greats in everything that is computing and large-scale engineering, so for me the goal was to get there and have the opportunity to contribute at least a little bit to that great organization. Of the projects you worked on, which are you most proud of being a part of? In particular, I worked on Android, specifically on the phone unlocking system: when you take your phone and enter the numbers to unlock it. When I got to Android it was obviously already implemented, but the security was a bit questionable; it was not the safest thing. With the help of the security team, I implemented the new backend of the lock system, which connects completely with the operating system's secure key system and provides a very high level of security, so that no one can come in, try a thousand codes, unlock the phone, and get information that could be personal. Super cool. At what point did you realize, or begin to feel, that Google was no longer the place for you, and start looking for new job opportunities?
Yes, there came a point after this project when I felt that I wanted less structured experiences and a little more impact. Google is a great, great company, but sometimes there are so many people, and the products have had so much time to mature, that it is very difficult to arrive and have a great impact, especially if you are a young engineer. So I just wanted to go out and have a more intense work experience, and I chose a startup with fewer people, where I could have a little more room to express myself, to find new things to learn and to implement. Great. So at this point we are more or less in 2016, and that is when you decided to join Zoox. At that time it was a fairly unknown startup in stealth mode, and its mission remains the same: to revolutionize shared mobility through the creation and development of autonomous vehicle technology and a fleet of robo-taxis. Tell us a bit about Zoox and what led you to join it. Yes, as you said, when I joined in 2016 Zoox was a very small company, and what fascinated me about Zoox was that they had the desire to confront the greats, like Waymo, the greats of autonomous mobility at that time. What particularly fascinated me was that Zoox was designing autonomous mobility from scratch, completely, from the ground up to the sensors; they were trying to reinvent what the car means. Zoox saw, and still sees, that the world is about to change the way it did when the combustion engine was created, the engine that today is used in all cars: we did not take a carriage, for example, and just put an engine in it; we redesigned the vehicle, we put a steering wheel on it, seats. It is something else. So what Zoox thinks is that we are going to have the same evolution with autonomous vehicles,

and what is cool about Zoox is that it is a company with a design culture; it is design oriented, it was created by a designer, and everything we do has quality and design from scratch. It is also a team, it was and is a very intelligent engineering team, of which I have the honor of being a part; it is the best team I have ever worked with. Another thing that fascinated me about Zoox is its values. The fundamental value is the safety of our passengers, and we are always asking ourselves that question. It gives me great pride to know that I work in a company where, above anything else we could think of, safety is the number one priority, and that is what autonomy and autonomous mobility should be: we want to reduce the number of accidents happening on the street and save lives. That fundamental value marks the way we do everything day by day, and I am proud to work with that. And finally, as I told you, Zoox was a startup at the beginning and it still has that DNA; they empower us to have an impact on solutions, and at Zoox I feel that I am always able to take on great projects and have a great impact. I feel very motivated and excited to be able to change the way human beings are going to move in the next 50, 100 years. Very cool that you feel that way about the company where you work. How about we delve into the technology a bit? I think a good place to start is understanding the different levels of vehicle autonomy. In this first image, we have the different levels of autonomy. Andres, can you guide us through these levels and explain what they are and what the differences between them are?
Sure, Daniel. As you can see here, there are 6 levels of vehicle automation. Levels 0 to 3 still require a driver, but as you go up the levels, less is required of the driver. At levels 4 and 5 we no longer have a driver. Let's look at them in a little more detail. Level 0 is no automation: it is the car you have in your garage today that you just turn on and drive. Level 1 is driver assistance, that is, longitudinal speed control systems but nothing lateral; the car will not change lanes, for example. Level 2 is partial automation. This helps us park, change lanes, and stay in the lane, and those are systems you can get in cars you buy today. Level 3 is conditional automation. The driver is still necessary at level 3 but does not have to be as aware of the surroundings; only at very specific moments will the driver have to take control of the vehicle. Level 4 is when we start to have real automation and we don't need a driver, so as you can see, at 4 and 5 the driver is drawn as a "ghost", and that's because there is no driver. At level 4 we can expect the vehicle to move completely autonomously, without interventions, in a predetermined area and under predetermined conditions; within those conditions it does not need a steering wheel or anything else to move. Finally, we reach level 5, at which the vehicle is capable, under all conditions, rain, snow, any terrain, of driving itself and being fully autonomous, and that is what we are looking to achieve. Andres, at this moment, where is the industry in terms of these levels? Right now I would say between 3 and 4. Perfect.
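As an aside, the six levels Andres walks through map naturally onto a small enumeration. This is an illustrative sketch only (the names and helper are my own, not Zoox code):

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The 6 SAE-style automation levels described in the talk."""
    NO_AUTOMATION = 0          # driver does everything
    DRIVER_ASSISTANCE = 1      # longitudinal control only, nothing lateral
    PARTIAL_AUTOMATION = 2     # parking, lane changes, lane keeping
    CONDITIONAL_AUTOMATION = 3 # driver takes over at specific moments
    HIGH_AUTOMATION = 4        # fully autonomous in a predetermined domain
    FULL_AUTOMATION = 5        # fully autonomous under all conditions

def requires_driver(level: AutomationLevel) -> bool:
    """Levels 0-3 still require a human driver; levels 4 and 5 do not."""
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION
```

The useful boundary is exactly the one from the talk: the "ghost" driver appears at level 4, where `requires_driver` flips to `False`.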

Andres, what does it take to reach levels 3, 4, and 5? Good question, Daniel. Several things are needed. If you have a normal car, the first thing you will need for the car to move by itself, obviously, is the sensors. We need a system of sensors to see around the vehicle, to understand what is there, and thus navigate and move around in a safe way. So, let's talk a little bit about sensors. Perfect. In addition to the sensors, what other things do you need? With the sensors we also create maps in order to locate ourselves. They are high-end, high-quality maps, in 3D, that allow the vehicle to locate itself in the world. What we do is drive around the city or area where we are going to operate, and we use the sensors, especially lidar, which we will talk about in a while, to create a very high-resolution map in 3 dimensions. After building it, we put it on the car, and when the car goes through that place again, we can look at what the lidar is reporting, which points we are seeing, and associate them with the map we made before. That gives us very high-resolution localization: with an error of very few centimeters, we can know exactly where we are in the world. Okay, let's talk about those sensors now. In this image, we see all the sensors on the Zoox vehicle. Andres, can you guide us through them?
Sure, let's start on the right. First of all, we have radar. Radar is a technology that was developed in Britain, I believe around World War II, and it uses radio waves to detect what is around: the radio waves bounce off objects and return to the radar, the radar measures how long those waves take to come back, we know the speed at which they travel, and through that we can get a very good sense of where things are, how big they are, and how fast they are going. That is radar. It is important because it can see at night; it does not care if it's raining, it does not care if it's snowing. It is low resolution, because we are obviously not seeing people's faces or exactly what a vehicle looks like and what its geometry is, but we can see a more-or-less amorphous entity, we can measure its speed with very high precision, and above all, we can do it under any conditions. Now we move on to lidar. Lidar is super interesting and, truthfully, not known by many people, but it is practically, as its name says, a radar with lasers. It is a crazy device: a laser that is spinning super fast, firing pulses thousands of times per second, and the lidar measures how long each pulse takes to return to the sensor. Through that, we can create a depth map; that is, each time we fire the laser we can create a point in space in 3 dimensions, and those points tell us exactly what objects are around us. It is a newer system, and it is super cool because it has very high resolution: the lasers emit collimated light, so we can fire a point of light with very high precision, it bounces and does not scatter, and we know very precisely that the pulse traveled out and returned directly. That is lidar. What advantages does it have?
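As an aside, the time-of-flight principle behind both sensors can be sketched in a few lines. This is an illustrative toy, assuming the pulses travel at the speed of light and that the lidar reports spherical coordinates (range, azimuth, elevation); real sensor drivers are far more involved:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Both radar and lidar time how long a pulse takes to bounce back.
    The pulse travels out AND back, so the one-way distance is c*t/2."""
    return C * t_seconds / 2.0

def lidar_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one laser return into a 3D Cartesian point (x, y, z).
    Thousands of these per second form the depth map described above."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)
```

For example, a return that takes about 0.67 microseconds corresponds to an object roughly 100 m away.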
Obviously, it can also see at night, with very high resolution, but unfortunately the laser bounces off rain, and if there is snow, it bounces off that too; it has reflections. Lidar is a super important tool, but it is not everything, because we also need to be able to see, for example, if there is fog; we need to see through that fog, and that is where the cameras come in. Cameras are the third, very important sensor that we use, and they are cameras like the ones you have on your phone,

a very high-resolution camera, and it obviously has color. With very high resolution we can see faces, vehicles, all kinds of things, but unfortunately at night the scene looks different, the camera needs light to see, and when it rains, water drops can fall on the sensor, things like that. The camera has very high resolution, but weather conditions can keep it from working as well as we would like. That is why we need those 3 sensors: with them we can cover much more, first of all in different meteorological conditions, and we also get redundancy, which is very important, because in any system that is going to carry my mother or my children, I want redundancy. I want that, if the radar does not see something, the cameras see it, or the lidar sees it. That is why the system Zoox is creating is high-end: we are using several sensors and fusing them to have a complete, full vision of the world around us. Now to the left: GPS. GPS is very important; it is a location system. With the GPS and the maps we created, we can know, at very high resolution, exactly where we are in the world, and that is very important, because if you don't know where you are, then you don't know where to stop, for example at a stop sign, or where to stop at a traffic light. It is very important to know, at very high resolution, where the vehicle is located. And finally, we come to the audio system: the microphones and the speaker. We also have to listen; as human beings we are listening at all times. If there is a police officer coming up behind us, or an emergency siren, hearing it allows us to understand the world in a different way. There are things for which we need the microphones to listen, and we also need the speaker to communicate with the world. What the speaker does is, for example, if there is a person who passes in front of us but suddenly stops and says: oops!
I don't know if this robot will let us keep going, we can send a message through the speaker: "It's ok, go on"; we stop and then we keep going. It is a system for the robot to communicate with the outside world. Wow, thank you very much for guiding us through all the sensors; we now have an idea of how all this works. It is super interesting how the computer receives all this data and processes it in real time so that the vehicle can navigate the streets; that was one of the things that attracted me to Zoox as well. Well, Andres, we were also talking about the mapping process. Now that we have the photo, can you describe what we are seeing and what the process of creating these maps is? Of course, this is one of my favorite photos. This street is in San Francisco, Lombard Street; it is a very famous street and we drive it every day at Zoox. Everything you see here is a 3-dimensional map that we have created with our lidar system. All of these are points that have bounced back and re-entered the sensor, so we know their exact position in 3 dimensions, and as you can see, the resolution is very high: we can see all the windows, practically all the leaves on the trees. It is very high resolution, and with this we can locate ourselves in a very, very precise way, much more precisely than what GPS alone could give us. Great. Now we have an understanding of the sensors and the maps; we have the first two things we need for the vehicles to move in an autonomous way. The third is to understand the software and how it works. So, can you explain to us what the software systems are: once the vehicle has all these hardware components, how does the data get to the computer, and what does the computer do with all this information?
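As an aside, the map-based localization just described, matching live lidar points against the prebuilt 3D map, can be caricatured with a tiny scoring function. Real systems use algorithms like ICP or NDT over millions of points and search over candidate poses; this toy just measures how well one set of 2D points fits the map:

```python
import math

def localization_error(live_points, map_points):
    """Toy fit score: for each live lidar point, find the nearest
    prebuilt-map point and average the distances. A localizer would
    try many candidate vehicle poses and keep the one that minimizes
    this error, giving the few-centimeter accuracy mentioned above."""
    total = 0.0
    for (x, y) in live_points:
        total += min(math.hypot(x - mx, y - my) for (mx, my) in map_points)
    return total / len(live_points)
```

A low error means the current scan lines up with the map, i.e., the assumed pose is close to the true one.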
Of course. Well, as you say, we already have an understanding of the sensors we use to see the world around us and the redundancy those sensors provide. Now we are going to talk a bit about the software: how we use those sensors to create a safe plan through the world and get our users to their destination safely. The first section we are going to cover is

the perception system. The perception system uses the sensors to answer a question: what is around me, how fast is it going, and what is it? That is: that is a vehicle, that is a cyclist, that is a pedestrian; it is in this position in 3 dimensions in the world, this is a box that defines its position, this is the speed it is going, and this is where it is heading. You can basically imagine the eyes of a human being, simply the visual system, saying: hey, that's a person, that's a dog, a cat, a mother with her child. All those things are part of the perception, the visual system of the vehicle, and they allow us to understand, in the first place, what is around us. The second section is my team; it is where I work, and it is the prediction system. After we have answered the question of what is around me, where it is, and how fast it's going, we have to answer the question of where it is going to go. And that, as you said at the beginning of this talk, is literally predicting the future, in 8-second windows. So, for each object that the perception system produces, our job is to simultaneously create several paths that the object, that actor in the scene, will follow or could follow. Take a vehicle, for example: if it is coming to an intersection, it can go left, right, or straight. A pedestrian can cross the street. A vehicle can be parked, but is it double-parked, is it blocking the lane, or is it just waiting for the traffic light?
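As an aside, the multiple-hypothesis output just described might be represented with structures like these. This is an illustrative sketch with invented names, not Zoox's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class PredictedPath:
    """One hypothesis for where an actor will go over the ~8 s horizon."""
    probability: float   # how likely this hypothesis is
    waypoints: list      # (x, y, t) samples along the path

@dataclass
class Actor:
    """One object the perception system produced: vehicle, pedestrian, ..."""
    kind: str
    paths: list = field(default_factory=list)  # several simultaneous paths

def most_likely_path(actor: Actor) -> PredictedPath:
    """The planner may weigh all hypotheses, but often needs the top one."""
    return max(actor.paths, key=lambda p: p.probability)
```

For a vehicle approaching an intersection, `paths` would hold the left, right, and straight hypotheses with their probabilities.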
These are questions that are very easy for a human being to answer; we are simply capable of living in an unstructured world without worrying about these little things that seem so simple. But for a computer, it is quite difficult to understand such an unstructured world and how things will move. Finally, after the perception and prediction systems, we can go to planning and controlling the vehicle. With everything that prediction and perception report, the planning system uses very advanced algorithms to plot a safe path around all obstacles and where they are going to be. At every moment we are thinking: where are all those things going? Do we have to stop? Do we move to the other lane? Do we steer to the right, to the left? Do we have to stop here for a moment for this person to pass? The planning system is deciding, at every moment, how to react to these perceptions and predictions. And finally, there is the control system, which outputs two things: the angle of the steering wheel and the acceleration. Those are the two things you need to drive autonomously. Easy, right? Yes, you make it sound easy enough. One question I have for you is: what happens if the robot does not know what to do, or does not know how to steer in a certain situation?
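As an aside, the fact that control boils down to exactly two commands, steering angle and acceleration, is nicely captured by the classic kinematic bicycle model. This is a standard textbook sketch, not Zoox's controller:

```python
import math

def bicycle_step(x, y, heading, speed, steer_angle, accel,
                 wheelbase=3.0, dt=0.1):
    """One time step of a kinematic bicycle model. The planner hands the
    controller only steer_angle (rad) and accel (m/s^2); integrating
    those two commands over time produces the whole trajectory."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer_angle) * dt
    speed += accel * dt
    return x, y, heading, speed
```

With zero steering and zero acceleration the vehicle simply continues straight at constant speed; any maneuver is just a sequence of those two numbers.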
If the robot does not know how to drive itself in a certain situation, we have a very advanced teleoperation system that allows a human being to look through the robot's cameras and say: don't worry, this situation can be handled this way. But it should be noted that the robot is still in control; teleoperation simply gives it a safe path around some blockage that it cannot understand at the moment. The robot is obviously still reacting: if someone comes out of nowhere from behind a car, or if a cyclist appears, the robot can still respond and stop safely. So it is really the union of the human being and the robot that makes it possible to navigate unstructured situations. Great, thank you so much for that answer. Andres, let's talk about your team. As we have said several times, you're part of the prediction team. On your team, what tools or technologies do you use to create the code that teaches, or that allows, the vehicle to do what it has to do?

First of all, what we use is big data: the thousands and thousands of hours of driving data we have from San Francisco and Las Vegas. Those thousands of hours are processed, and we use artificial intelligence systems to create correlations between the data we have seen and the objectives we have. As for tools: for languages we use C++ and Python, and for artificial intelligence models we use TensorFlow, but anything could be used. The tools do not matter; what matters is the algorithms you apply, and that is why this career is so cool: it doesn't matter which language or system you know, what matters is the data and knowing how to create those correlations to be able to predict the future. Great. So you write your code, and through that code you tell the vehicle what to do, but I think there are several steps to validate that code; it cannot go directly from your computer to the vehicle. Can you explain how you validate the code?
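As an aside, "creating correlations between the data we have seen and the objectives we have" is, at its simplest, fitting a model to logged data and extrapolating. A deliberately tiny stand-in for the idea (real prediction models are learned neural networks, not a one-variable regression):

```python
def fit_constant_velocity(times, positions):
    """Least-squares fit of position = p0 + v * t: the simplest possible
    'learn from logged data, then predict the future' loop."""
    n = len(times)
    mean_t = sum(times) / n
    mean_p = sum(positions) / n
    v = (sum((t - mean_t) * (p - mean_p) for t, p in zip(times, positions))
         / sum((t - mean_t) ** 2 for t in times))
    p0 = mean_p - v * mean_t
    return p0, v

def predict(p0, v, t):
    """Extrapolate the fitted motion out to time t (e.g. the 8 s horizon)."""
    return p0 + v * t
```

The point of the sketch is the shape of the problem: observe, fit, extrapolate; the sophistication lies in the model, not the loop.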
Of course. As I told you, Zoox's fundamental value is safety, and everything we do, we validate rigorously. In the first instance, we have testing for all our code: every piece of code that enters the system has to pass very, very rigorous unit tests. That first level is the simplest thing, what any programmer should be doing, no matter the domain. The second is simulation. Zoox has a virtual simulation system in which we can take situations we have seen in the real world, build a new version of the software, run it in a virtual version of the car, and make sure that everything still works as we expect, or that things improve; obviously, we always have to improve. In this virtual system we have thousands and thousands of scenarios, and every time we make a change, we run all those scenarios with the new software and make sure everything is fine. Finally, after rigorous validation with unit tests and simulation across thousands of scenarios, we move on to vehicles in the real world. We have trained, very professional drivers who basically take your version of the software, put it in the vehicle, know exactly how everything works, and tell you what worked and what didn't. Through those 3 steps we can ensure that what we are doing is safe and that it is improving our ability to move around the world. Andres, how did you feel the first time you sat in an autonomous vehicle and saw your code work in real-time?
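As an aside, the second validation step, replaying logged scenarios against every new software version, is essentially a regression suite. A minimal sketch of the idea, with invented field names and a single made-up safety criterion (minimum clearance), not Zoox's simulator:

```python
def run_scenario(software, scenario):
    """Replay one logged situation through a candidate software version
    and check that its safety criterion still holds."""
    result = software(scenario["inputs"])
    return result["min_gap_m"] >= scenario["required_gap_m"]

def regression_suite(software, scenarios):
    """Every code change re-runs every scenario; all of them must pass
    before the software moves on to real-world test drivers."""
    return all(run_scenario(software, s) for s in scenarios)
```

A failing scenario blocks the release, exactly like a failing unit test, just at the level of whole driving situations.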
I can't describe it. It's very exciting to see the things you build interact with the real world; it's super exciting, I can't put it into words. It's a very cool feeling to be able to impact the world in some way and be part of it through your own code. Great. Well, I think the audience already has a good understanding of at least the basics of how the technology works. How about we watch some videos of the robot in action and you describe what is happening? But before watching the videos, I would also like to tell the audience, the people listening to us, that if they have questions, please put them in the chat, because after this section of videos we will be answering questions. So if you have any questions for Andres, this is a good time to ask them. Andres, we have prepared 4 different videos for you,

this is the first one of them. Andres, can you explain what is happening on this screen, what we are seeing? Can you describe it? Yes, of course. Well, these are videos we have published on YouTube at Zoox, and as you can see, there are two panels. On top we have the cameras, three of them; we have more cameras on the vehicle, obviously, but these are the cameras pointing forward. As you can see, the little boxes that are painted, the pink ones and the blue ones, are what the perception system is saying: where things are and how big they are. We are taking those 3D boxes, which describe exactly the dimensions of things, and projecting them onto a 2-dimensional plane so that they can be seen in this view. At the bottom, we have the virtual world, the platonic world in a way, which is what the vehicle is seeing. All those boxes are projected down here onto our version of the world map, exactly as the vehicle sees it, and as you can see there is a corridor, a green mat, which is where the vehicle can pass, and the red doors indicate that the vehicle is going to stop there. In this particular situation, now that we have an idea of what we are seeing here, with all these people and objects we are tracking at all times, what is going to happen is very interesting, and I want to point out some things. First of all, it is at night; as I told you, the cameras behave differently at night and during the day, things look different in color, so it is very important that the system works in both situations, and it is not obvious that it does; that requires planning and training. Secondly, I want to emphasize that we are passing through a construction zone. The world is dynamic, it is not structured; we cannot expect the streets we have mapped to always stay the same. There will be blockages, there will be obstructions, and what
is super cool here at this moment is that it is a construction zone which we have never seen before, and we are going through it safely. As a human being, you see this, you see the people with their jackets saying "keep going"; for a human being it is easy, no problem, you can go through there. But we have to teach a robot what this means, and that is what is super cool about this video: it is, in a way, a moment with very little structure that we are navigating in a very interesting and very safe way. So if we play the video we can see what is happening. Here, as you can see, we are following all these people in real time, we are passing... Huh! Did you see it? We see that it is a construction zone, we pass very carefully, we continue, there are pedestrians everywhere, we very carefully pass everything, and we have already reached the other side. For a human being it is quite easy to do that, but a robot has to be very aware of everything, and it is an area without structure, so for robots it is complicated, and I am very proud that we can do this.

What you said about these construction areas is completely correct: you have no control over when they will occur or where they will occur; a street gets damaged and suddenly there is construction there. In those construction moments, are those also situations that get discussed with TeleOps, the team that you mentioned previously?

It could be, but we can also detect it ourselves. The idea is not to use teleoperation if we don't need it, but yes, it could be.

Perfect. In these cases, where the vehicle encounters things that it does not understand or there are changes in the map, does the vehicle tell the system that the map must be updated to take into account this change on the street?
Yes, we update the maps every day. Since we are always driving, we always have the ability to update our maps; things change and we have to be able to respond to that.

We have a question before going to the next video,

Juan Pablo Rodriguez asks us: how much more visibility does an autonomous vehicle have compared to a human?

A very good question. An autonomous vehicle in many ways has more visibility than a human being because it has more sensors: it has the radar, it has the lidar, and it has the cameras. Obviously, just like a human being, it cannot see behind walls, for example, but we have visibility of 200 meters around the vehicle, and we have various types of sensors that allow us to see in situations where a human being could not, whether there is a lot of light or very little. Human eyes are incredible, obviously, but the three sensing modes give us a little more redundancy.

Perfect. Let's go to this second video; here we will see the vehicle driving in the rain. Andres, can you give us a description of what we're going to see?

Yes. As I was saying, weather conditions are complicated. For example, in the rain we have to worry because the camera can have drops of water on it, and that can create distortion in the video and create problems for our perception system, so we have to be very careful with that and train for it. Also, here we have a very dynamic situation: we are going to find a vehicle that is double-parked. It is stopped on the street and it is not going to move; it is parked there, and who knows, maybe the person is inside doing something. We have to be able to detect that, because if a car is just stopped waiting for a traffic light or for a pedestrian to pass, obviously we do not want to overtake it; that would not be safe and it is not the way anyone should drive. What we are doing here is detecting that the car is double-parked, and that is why you can see that we have it marked in yellow. The vehicle stops safely, and when it has the ability to pass safely into the left lane, it does, and all in the rain, which is super cool. As you can see here, passing in the rain, and here we see the
vehicle, we see the red door and we stop, and we let that vehicle pass, and we merge very safely back into the right lane, and we pass that pedestrian that we thought was not going to cross, but we decide no, he will give us the way, and we keep going. Super cool.

Andres, we have another question from the chat: in the future, do you think it will be forbidden for humans to drive vehicles?

It is a very good question. The truth is that I cannot predict the future for more than 8 seconds, but I don't think so. I believe that people will always drive their vehicles; it is something that gives a lot of peace, a lot of joy to people. I believe that at some point we will reach the point where most vehicles are autonomous, and with that we will greatly reduce road deaths, but I don't think we'll get to the point where there won't be cars or where it will be illegal to drive. Obviously, that's my perspective; it's not worth much, but it's just my perspective.

Okay, let's move on to the third video. Here we are going to see the vehicle going at high speed, on highways and expressways. Can you tell us what we are going to see, or what is the difficulty of driving on highways?

Highways are complicated because the vehicle has to respond much faster: the time from the moment we see something in a sensor until the moment we respond to it has to be very small. Why? Because the vehicles are going very fast; one second represents many meters of displacement. On a normal street that is not the case; at low speed we do not need such a low reaction time. Secondly, what is difficult about highways is that, at the same moment when we have to have a very low reaction time,
we also have to see much further, because vehicles about 100 meters away are going to interact with us. On a city street, someone 100 meters away doesn't matter; we won't have to interact with that actor in a short time. But on the highway we do have to see far away and understand everything that is happening much faster.

In this video, what we are going to see is that the vehicle wants to take an exit to the right in a very short time. There are two highways intersecting here, and there will be a lot of traffic on the right, so we have to decide how we're going to get into that lane and ultimately get off the highway safely.

Great. As you can see, here we are waiting for the moment when we can get into that lane, and there the opening came, and we are happy in our lane so we can take the exit that leads us to the Zoox offices in Foster City, California.

Andres, we have a question from Eric Coleman, who asks: how do you handle prediction at high speeds? Is there a modification of the prediction window?

Yes. As I said, obviously we have to predict things much faster and at a greater distance, but Zoox's software is the same at high or low speeds; our prediction does not change when we are on highways. The system simply works the way it has to work, both on the highway and on city streets.

Great. This is the last video we will watch. In this video we will see the vehicle driving in a different city; all the previous videos we have seen are of the vehicle driving in San Francisco, and now we are going to see it driving in Las Vegas. Before we start the video: what is necessary to enter a new geography, so that the vehicle understands what is happening in these new surroundings?
Well, we are very proud that we use the same software, but there are always things to do: there are geometries that are different, traffic lights that are different, and we have to incorporate the data that we see in a new geography. But in general, things transfer very well from location to location.

What are we going to see in this video?

What we are going to see is, obviously, this new place, the Las Vegas Strip. Here we also have to move to the left lane, and there is a lot of traffic, so we have to do it safely. We are going to wait for all the blockages to pass so that we do not merge into the left lane unsafely, and we are going to do it very conservatively but at the same time maintaining progress. As you can see, here we get there, we predict that the vehicle behind will allow us to pass, we merge into the left lane, and we are going to take the left turn here, and we obviously do it very smoothly and safely.

We have a question from Carlos Matus: how do you predict cars in the opposite direction, for example on hills or in mountains with a lot of curves?

The maps help us. If there are hills or curves, the maps allow us to understand what the curves of the roads are, and we use those curves from the maps to predict exactly what the trajectory of the vehicles on those roads will be.
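The idea Andres describes, using the map's lane geometry as a prior for where other vehicles will go, can be sketched very roughly. This is a hypothetical illustration, not Zoox's actual code: it represents a mapped lane centerline as a polyline of (x, y) points and predicts a vehicle's future positions by advancing it along that curve at constant speed.

```python
import math

def arc_lengths(polyline):
    """Cumulative distances along a polyline of (x, y) points."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    return dists

def point_at(polyline, s):
    """Point at arc length s along the polyline (clamped to its ends)."""
    cum = arc_lengths(polyline)
    s = min(max(s, 0.0), cum[-1])
    for i in range(1, len(cum)):
        if s <= cum[i]:
            seg = cum[i] - cum[i - 1]
            t = (s - cum[i - 1]) / seg if seg > 0 else 0.0
            (x0, y0), (x1, y1) = polyline[i - 1], polyline[i]
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return polyline[-1]

def predict_along_lane(polyline, s0, speed, horizon=3.0, dt=0.5):
    """Constant-speed prediction that bends with the lane's mapped curvature:
    the vehicle at arc length s0 (meters) is advanced along the centerline."""
    steps = int(round(horizon / dt))
    return [point_at(polyline, s0 + speed * (k + 1) * dt) for k in range(steps)]
```

A real system layers learned models, interaction reasoning, and uncertainty on top of this, but the map curve supplies the geometric backbone of the predicted trajectory, which is exactly why curves and hills are handled by consulting the map rather than extrapolating in a straight line.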

Well, I think we no longer have time for any more questions, so now I can only thank you for your time, for giving us this space, for educating us about this technology, and for telling us a little about your story. Thank you very much, Andres.

Of course, with great pleasure. I hope you go to Zoox.com and see what we are doing, and if you are interested, apply; I would love to see more Latinos in autonomous vehicles, and hopefully someday we will work together. Thank you very much for having me, and have a good weekend.

Thank you very much. Oscar, I am passing it to you so that you can take us to the conclusion of this call.

Andres, Daniel, thank you very much for this great content. There is no doubt that there are still many things to learn. I would like, probably in the future, to invite Andres again; let's wait a little while so that the technology matures, and see how it evolves from how we saw it today to how we will see it in a few months, or maybe in a couple of years. It would be great to have you again, Andres, and thank you very much. I want to thank you on behalf of the entire Terminal team for the time you dedicated to this talk, because it was not only today; we also had preparation talks beforehand. For that and for all of this, we thank you very much.

For the audience, I hope everyone enjoyed it, that you learned a lot, and above all, that it inspires you for your next projects. I don't want to say goodbye without first reminding you of 3 things. Number one: go to Terminal.io/Openings and review the vacancies we have for Mexico and Colombia, and soon, thanks to Daniel, in more Latin American countries. Number two: for the curious ones, we have a short question and answer session with Claudia, Ruben, and Daniel, who are part of our Talent team. There we will also be able to see some of the vacancies we have, and you can ask questions about Terminal, our culture, the benefits, the recruitment process, whatever you want. We will try to answer all of
that, and then I'll wrap up here and head there too. It will be through Zoom; you can already see the link in the YouTube comments. It is a Bitly; you can find it in the last comment from Terminal. I would love to see you there to talk and to answer any of your doubts; it is not necessary that you are looking for a job. Number 3: please follow us on Instagram. One of the invitation emails said that we were going to give away a one-year Platzi account; tomorrow afternoon I am going live on Instagram to close the week and give it away transparently. It will be a raffle, a random drawing, that I am going to broadcast on Instagram, and I would like to see you there; follow us at Instagram.com/JoinTerminal. I repeat: Instagram.com/JoinTerminal. And please go to the link, the Bitly, bit.ly/TerminalQA1020; I would like to see you there right now so that we can talk. Now let's have the after-party. I only want to thank and express my gratitude again to the entire production team, to Daniel and Andres, and to you for joining us. Thank you very much, goodbye.