Andrew Thomas |
Welcome to the podcast today, I’ll be your host for this episode. My name is Andrew Thomas. I’ve been in the localization industry for about 25 years now. I’m currently a senior product marketer for RWS and I have the great pleasure to be talking about a fascinating topic today with our guest Vangelis. Without further ado, I’d like for him to introduce himself and give us his background, and then we’ll dive right into the fascinating discussion around metaverse and cosmo-localization and what it all means. |
Vangelis Lympouridis |
Absolutely. Thank you very much for having me here today. My name is Vangelis Lympouridis and I am a trans-academic: I follow all types of innovation inside and outside academia. Since 2011, I’ve been associated with USC, the University of Southern California, both the School of Cinematic Arts and the Viterbi School of Engineering. I’ve been teaching AR, VR and mixed reality technologies there, and I also have a visiting position at Arizona State University, but most of my professional time is with my company, Enosis, which offers what I call innovation by design: a framework for innovation that is based on design thinking, design principles, and understanding context and value in order to apply the technologies needed.
Vangelis Lympouridis |
And given my eclectic background, I operate in many contexts, from healthcare to automotive to architecture. Every context has to learn, and by that I mean a translation, a transcription of methods and know-how from one to another. In parallel, I have a PhD in Whole Body Interaction, so a very good understanding of how the body is, and can become, the interface of digital technologies. That helps a lot in understanding the new frontiers of technology and interaction, and how everything comes together to serve human needs.
Andrew Thomas |
First, I’d just like to thank you for joining the Globally Speaking podcast. Happy to have you here, especially since you are talking about a very interesting and very new, trendy concept and topic for the industry that I think probably a lot of our listeners are not even entirely familiar with. So if you don’t mind just kicking off, please introduce the concepts of the metaverse and all of these other kind of new emerging technologies, and how they impact our industry. |
Vangelis Lympouridis |
Perfect. Thank you so much, Andrew. It’s a very interesting and complex topic, and I’ll do my best to clarify some notions and frame it in a way that the localization industry and the experts in the field can embrace some of these concepts, metabolize them and let them grow in their own thought practices. As a preface, our journey started last September with RWS Group, when we started this thought development and thought leadership around how language can represent worlds and vice versa. What is the future of localization? How can spatial computing, other technologies, and the metaverse serve and create new value, new practices and new services for the localization industry? That evolved into a partnership with RWS Group, which both parties are very excited about, and we are working on developing the roadmap for the future of localization in spatial computing. And what is the future of our reality?
Vangelis Lympouridis |
So one of the goals is to think about localization 10 years ahead, but be very pragmatic, not only about the use cases at that level, but about what the immediate steps to reach there are and how these will potentially develop. So there’s a lot of structure behind all these explorations we’re doing with Elsa and the rest of the RWS Group. To begin, I would like to frame and clarify what the core technologies are that we’re talking about, how augmented reality, virtual reality and mixed reality differ, and give some introduction to the core technologies that inform what we call the metaverse.
Vangelis Lympouridis |
So an umbrella term to understand augmented, virtual and mixed reality is extended reality. When you hear extended reality, XR, or immersive technologies, we’re talking about the same thing. Augmented and virtual reality are in essence use cases of immersive media experiences, and the core notion here is immersiveness: you are surrounded by your experience in a new make-believe. Augmented reality is a simple interface with the world. Think about the rumors around the augmented reality glasses that will come from Apple, from Snap and other companies in the space: a single pair of glasses that can overlay information. So augmented reality is about overlaying information onto your visual field, within the living environment. Virtual reality, you’re fully occluded, you wear a device-
Andrew Thomas |
The big headsets. |
Vangelis Lympouridis |
Head-worn devices, to distinguish them from other types of virtual reality. And you’re fully occluded. The technology takes over your visual and auditory cues. Sometimes you engage other senses too, but everything is synthetic; your full experience is in the virtual world. And then mixed reality is a very interesting concept, and a concept I’ll try to separate a little bit from augmented reality. You still operate within the physical environment with augmentations.
Andrew Thomas |
Ah, I see. |
Vangelis Lympouridis |
But these are interactive and understand context. So it’s not just information that I see, something-
Andrew Thomas |
It’s not just an overlay. It’s something that you can actually interact with. |
Vangelis Lympouridis |
Correct. Correct. And you can have holograms, and you can have all kinds of rich interactions between the content, the information and the multimodal interactions of the user.
Andrew Thomas |
So, just so I can give a real-world example, and you can tell me if I’m right or wrong here. Augmented reality is: I’m wearing the latest Apple glasses and I look in a particular area, and there is some sort of informational overlay based on what I’m looking at. Mixed reality is: in addition to that informational overlay, there may be some sort of displayed user interface that I can actually take my hands and interact with, and that has an impact and changes the experience somehow.
Vangelis Lympouridis |
Correct. And also entities, holograms, things that can have behaviors of their own, not just-
Andrew Thomas |
Oh, right. It’s not passive. Yeah. Yeah. I understand what you’re saying. Yes. Almost like something out of Star Wars where you have the holographic displays of people. |
Vangelis Lympouridis |
Perfect. So let’s talk a little bit about spatial computing. Spatial computing is a term that we use in computer science to embrace the new infrastructure of what we call Industry 4.0, and down the road, Industry 5.0. So we’re talking about infrastructural technologies that support the thesis of a three-dimensional, fully dimensional web, where you are in the physical environment and the physical layer and the data layer come into one-to-one correlation. We’re talking about infrastructures of 5G, 6G and beyond, digital twins, the internet of things, edge computing and all that. These are infrastructural technologies that come together and, along with the immersive media, which are the interfaces, the VR and mixed reality devices, headsets and all that, constitute what we call spatial computing. So spatial computing is the core.
Andrew Thomas |
Yeah. And again, just so, literally meaning spatial: it’s the computing framework necessary because you’re going to generate a lot more data, so you need a lot more bandwidth. You also need more sensors with the internet of things, lots more input, because you’re interacting.
Vangelis Lympouridis |
Correct. |
Andrew Thomas |
So all of those kind of foundational pieces put together allows computation around space. Like basically whether it’s virtual space or real or overlaid on top of some real space. Right? |
Vangelis Lympouridis |
Absolutely. Yeah. Along these lines, when we’re talking about digital twins, it’s about creating a digital replica of a physical object that can be as big as a building or as small as a, I don’t know, a screwdriver.
Andrew Thomas |
Yeah, even now, most major shopping apps allow you to virtually display a piece of furniture in your room to see how much space it takes up. Right. That’s a great example of, I guess, augmented reality in that situation.
Vangelis Lympouridis |
Absolutely. |
Vangelis Lympouridis |
And recently we have the notion of the metaverse, which is, let’s call it a marketing term. It’s the popularized term for all of the above that we discussed. If we start talking about spatial computing and all these technical terms, it would be hard for the public to embrace them, so they came under the popular term of the metaverse, as it was introduced. Everything becomes a little bit more tangible in terms of the terminology, but it creates a lot of confusion about what we’re talking about.
Andrew Thomas |
Well, I can remember, even as a kid, I read Snow Crash when it came out. And clearly that was the first place where the term was invented, and the author was painting that picture, which clearly is not how things are necessarily going to develop. But it’s interesting; that was, I don’t know how many years ago, decades, right? So it’s fun to see that we’re at least bumping up against the beginning of it, that what was written about back then in science fiction has now become science reality, in a way-
Vangelis Lympouridis |
In a way. |
Andrew Thomas |
Right. We’re not fully there, but yeah, we’re moving in that direction.
Vangelis Lympouridis |
It’s not a given that we’re going to get there, necessarily. There’s not one notion of the future. The future is plural; it’s futures. That’s why you cannot predict it, and that’s why we engage in all these exercises in foresight and world building: to see what is possible and how to get there. But the important thing is that you cannot stop technological advancement. And within that, people find use cases based on real value. So whatever doesn’t reflect real value won’t get adopted.
Andrew Thomas |
I think that’s a really great point. I want to home in on that just a little bit, because that’s the biggest shift that I see currently in how people are talking about it. I’m a gamer personally; I have a gaming background. I got into the loc industry through doing localization of video games many, many, many years ago. So this is something that’s always been kicking around: virtual reality, augmented reality. It’s gone through several waves of attempts and people adopting certain things, but it strikes me that this current wave feels more commercially driven, more focused on bringing true value, on existing for a reason rather than just because it’s cool. And I’d love for you to dig into that a little bit more: what are some good, real-value use cases that you see either already emerging or soon to emerge?
Vangelis Lympouridis |
Absolutely. I’ll be happy to do that. For the moment, let’s separate commercial or enterprise uses from consumer uses.
Andrew Thomas |
Sure. Absolutely. |
Vangelis Lympouridis |
Consumer will come later, and with a question mark at the moment. But in enterprise, first and foremost we’re talking about training. There is extremely high value in training, because there is a lot of dematerialization of the training, and scalability. The way you form memories in virtual reality, and in mixed reality in particular, is embodied: the form, the gesture, the spatial coordination, what the task is all about. You can simulate it to a great degree, and then from there repeat it as many times as needed to receive the embodied learning. And as part of that, simulation and data visualization offer tremendous opportunities, reaching a level where you can simulate very complex operations and see how they respond to all kinds of cues before you even build anything, and enhance communication. And we see these technologies already adopted, with no way that industry will roll back to the previous state of the art. In industries like automotive, for example, to design a car and really go into the specifics of feel and look, especially for the interior, would take endless amounts of time, effort and material.
Vangelis Lympouridis |
And now it’s all dematerialized. In real time, a designer can develop something and an executive can evaluate it and give feedback. This loop can be as efficient as real time and connect with other parts of the industry, where the prototype is being developed on the fly, and all kinds of things. Similarly, in architecture, an architect can design and a client can really walk inside the virtual environment and give feedback, and this feedback loop, again, is in real time. In civil engineering, we are seeing evidence of companies adopting augmented reality on the construction side, so you can visualize and perceive different layers of information about how the building is going to be built. And again, these feedback loops, creating suggestions and corrections over CAD and all that, are something that will see a lot of traction.
Andrew Thomas |
I think somebody might be asking why that is an improvement, or why it’s better. And the one thing I want to point out to folks is that our brains are already designed to think spatially, right? We evolved as people that process information spatially. And so, to your point: seeing a blueprint on a piece of paper of what a building is going to look like, versus being able to walk inside the building, turn your head around the way you would when you’re actually inside it, and see how the light is going to fall in the middle of the day versus in the morning or at the end of the day; you can actually see exactly what it’s going to be like. That gives you a much more immediate experience and drives people to make decisions quicker and more effectively, and hopefully avoid going down a bad design direction and winding up with a complete prototype, then realizing after the fact: oh, we need to rethink everything we did because we didn’t think about this real-world scenario. Right.
Vangelis Lympouridis |
Absolutely. These devices should not be seen just as visual displays. They’re far-reaching; they have a lot of sensors on them. So I’ll give you an excellent example. There was a company here in the United States that was making staircases, interior staircases, and they adopted the HoloLens for R&D. They soon realized that their personnel could walk into a house, use the HoloLens to create a 3D model of the staircase with absolute dimensions, overlay different solutions for the client right on the fly, and on approval, at the back end, the manufacturing facility would start cutting and creating the materials from the schematics to build this particular staircase. It’s an excellent combination of what we’re talking about in terms of the new economy and the new ecology of Industry 4.0, because you’re using the onboard sensors to measure, to calculate, to create digital twins, and then also to display.
Andrew Thomas |
It’s like, you’re getting a more efficient supply chain in that scenario basically. Because you’re building on demand to the exact specifications because the requester now has the tools to be able to give you that information immediately versus in the old days, you’d have to have somebody come in and do all the measurements for you. The immediacy of it is really impressive, and kind of amazing. |
Vangelis Lympouridis |
Yeah. And there’s a term that we’ve been using for a while: the experience economy. How the experience economy could be manifested outside these technologies was really hard to tackle. But if you have experiential technologies and immersive technologies, suddenly the experience economy makes sense, and the transition is far more plausible, because we know how it could potentially manifest itself. A brilliant example is what Nike has been doing with their manufacturing pipeline, where you can customize your sneaker and the manufacturing facility will produce this personalized sneaker. So when you go into the metaverse to customize it, the fun and play of customization actually translates into a whole production pipeline that is already developed and defined to do this personalization.
Andrew Thomas |
It’s like mass-produced customization, and in the past, those two things have always been in conflict. You could do mass production, or you could do something really customized. And now you get both, and that’s really amazing.
Vangelis Lympouridis |
And the interface around it, and the spatial computing coordination between the two.
Andrew Thomas |
Yeah, I mean, it’s mind-boggling. I think we’ve set the scene as to what we’re really talking about, even though I’d love to go further down that path. We talked about a lot of the beginning of that supply chain: ideation and creation of materials. But there are lots of avenues, as you mentioned earlier: the training and learning aspects of how to use a brand-new product, or, if you are in the service and repair industry, being able to service an automobile or an airplane in a guided way that’s taking advantage of spatial computing and augmented reality and all those things. There are lots of interesting use cases here that I’m sure we could talk about all day long.
Andrew Thomas |
For the focus of this podcast, I’d like to turn it back to localization and ask you: how does this now impact our industry, and localization in general, in how we approach this new world?
Vangelis Lympouridis |
So what does that mean for content and content localization? First of all, it’s an expansion of content and content classification, which now includes 3D models, textures, materials, animations, rotations and orientations: what are the spatial characteristics of content? So suddenly there’s an expansion in the categories, right? To include all these new content categories.
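To make this expanded classification concrete, here is a minimal sketch of how spatially aware content categories might be modeled. The category names, fields and validation rule are illustrative assumptions for this sketch, not an actual RWS or industry schema.

```python
from dataclasses import dataclass

# Illustrative content categories for spatial experiences, extending the
# classic text/audio/video set (the names are hypothetical, not a standard).
SPATIAL_CATEGORIES = {
    "text", "audio", "video",                # classic localizable content
    "model_3d", "texture", "material",       # visual assets of a 3D scene
    "animation", "rotation", "orientation",  # spatial characteristics
}

@dataclass
class LocalizableAsset:
    """One unit of content to localize, tagged with spatial metadata."""
    asset_id: str
    category: str
    locale: str = "en-US"
    # Position and rotation may themselves need adaptation per locale,
    # e.g. mirrored layouts for right-to-left reading cultures.
    position: tuple = (0.0, 0.0, 0.0)
    rotation_deg: tuple = (0.0, 0.0, 0.0)

    def __post_init__(self):
        if self.category not in SPATIAL_CATEGORIES:
            raise ValueError(f"unknown content category: {self.category}")

# A 3D store sign whose orientation is part of what gets localized.
sign = LocalizableAsset("store_sign_01", "model_3d", rotation_deg=(0.0, 90.0, 0.0))
```

The point of the sketch is only that spatial attributes ride along with each asset, so a localization pipeline can treat orientation or placement as localizable properties, just like strings.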
Vangelis Lympouridis |
And then the digital twins and all these rich interactions with content that constitute your experience within this continuum of realities are very context specific, and they require situational awareness; that becomes key. So in contrast with other technologies, spatial computing is experiential, and this experience is driven by worlds and language. And it’s very important to note here that all the speculative work that we’ve done in the past three months is actually coming out from companies like Facebook at a much faster pace than we anticipated.
Vangelis Lympouridis |
Last week Meta announced that they plan to build an AI-powered, universal speech translator. Why are they doing it, and what does that mean for the existing ecosystem? What they’re interested in is social presence and how social interactions get into the metaverse, and for all of that they will build a layer for automatic translation. As part of that, they also announced an AI-driven engine that will create, will build, virtual worlds based on a voice description.
Andrew Thomas |
Oh, wow. |
Vangelis Lympouridis |
So you have natural language processing; you semantically analyze it. And there’s a video with Mr. Zuckerberg using language to create the world he would like to be in. It’s a demo, but these illustrations are exactly what we have been exploring, and now they are getting accelerated. So it’s very interesting to see where it all will go, but this early validation didn’t come exactly as a surprise. The surprise was how fast this prediction came true: that Meta and other companies in the metaverse would jump into localization and automatic translation and all this context awareness.
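The pipeline Vangelis describes (speak a description, semantically analyze it, get a world) can be caricatured in a few lines. This is a toy keyword lookup standing in for real semantic analysis, not Meta’s engine; the lexicon and entity fields are invented for illustration.

```python
# Toy stand-in for semantic analysis: map words in a spoken description
# to placeable scene entities (lexicon and fields are invented).
ENTITY_LEXICON = {
    "beach":  {"type": "terrain", "texture": "sand"},
    "island": {"type": "terrain", "texture": "grass"},
    "clouds": {"type": "sky_object", "count": 5},
    "trees":  {"type": "prop", "count": 3},
}

def build_scene(description: str) -> list:
    """Return one entity dict per recognized word, in spoken order."""
    words = description.lower().replace(",", " ").split()
    return [dict(ENTITY_LEXICON[w], name=w) for w in words if w in ENTITY_LEXICON]

scene = build_scene("Take me to an island beach with some clouds")
```

A production system replaces the lexicon with a trained language model, but the shape of the idea is the same: language in, a structured, renderable scene description out.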
Andrew Thomas |
What about some of the cultural nuances that factor in when you’re in a spatial scenario? You’re not just translating words on a page; you’re not even just localizing an image or a video. You’re literally localizing, as you say, an experience of some sort, whether it’s AR, VR, XR or whatever. And I would imagine there are some significant cultural differences when you go from country to country and culture to culture, where not every experience, even if you translate all of the words, is actually going to deliver an appropriate experience for that particular culture. Right? So do you already have a sense of some of the non-word-related issues that companies in the localization industry are going to face when they tackle translating, say, the business content of a virtual store in Meta’s meta world, for example?
Vangelis Lympouridis |
You’re spot on. Absolutely. The way we are thinking about that is about transcreating an experience- |
Andrew Thomas |
I was thinking transcreation. It’s very similar to that, isn’t it? |
Vangelis Lympouridis |
Absolutely. So transcreation of words within context; meaning making is richer than language itself. When we’re talking about real-life experience and the interactions within real life, the environment is taken into account, and then you have gestures and all kinds of multimodal interactions that emerge in there that have cultural significance, as you said. So from the space to the communication cues to all kinds of situational context awareness, the representational space of the semantic analysis becomes really broad. And that’s where orchestration, based on all the experience of the localization industry, is where the value is.
Andrew Thomas |
This is the value that we can bring, you say?
Vangelis Lympouridis |
You can bring, absolutely. |
Andrew Thomas |
Okay. |
Vangelis Lympouridis |
Absolutely. |
Andrew Thomas |
Immediately, I’m already thinking of sight order: if yours is a Western, Latin-based language, it’s left to right, top to bottom. That’s literally where your eyes are going to go when you’re trying to find or process information, and the direction they’re going to move, which would be different for Arabic or other languages that don’t follow the same sight pattern. Right? So that would be something, and I know it’s a kind of simplistic example, but it’s the kind of thing I would imagine we as an industry would bring to the table to help companies think about.
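Andrew’s sight-order point can be sketched as a small layout rule. The locale handling below is deliberately simplified, and the right-to-left language set is illustrative and incomplete; a real system would consult locale data such as CLDR rather than a hand-made list.

```python
# Simplified sketch: pick the eye-scan direction for a spatial UI from a
# locale tag, then order panels so the most important one is seen first.
RTL_LANGS = {"ar", "he", "fa", "ur"}  # illustrative, not exhaustive

def scan_direction(locale: str) -> str:
    """Map a BCP 47-style tag like 'ar-SA' to a reading direction."""
    lang = locale.split("-")[0].lower()
    return "rtl" if lang in RTL_LANGS else "ltr"

def order_panels(panels, locale):
    """Reverse spatial panel order for right-to-left locales."""
    return list(reversed(panels)) if scan_direction(locale) == "rtl" else list(panels)

ltr = order_panels(["menu", "detail", "help"], "en-US")
rtl = order_panels(["menu", "detail", "help"], "ar-SA")
```

The same principle generalizes from flat panels to where objects are placed in a 3D scene relative to the user’s expected first glance.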
Vangelis Lympouridis |
Yeah, absolutely. And as we said, beyond natural spoken language, gestures, eye tracking, expressivity and even biomarkers become relevant and highly important, because in the context of the future, your iWatch, your ring, your heart rate monitor or whatever is a data input on your physiology, and we can harvest that. So within interpersonal communications this is a factor, and we’re already measuring biomarkers. And to extend it just a little bit further, all of these augmented reality, mixed reality and virtual reality headsets are going to have embedded biosensors.
Andrew Thomas |
Wow. |
Vangelis Lympouridis |
Starting from eye tracking and going all the way to heart rate and-
Andrew Thomas |
So they could literally gauge excitement, whether it’s because someone is getting angry or getting happy. They could say: this person’s heart rate has started to elevate while they’re having a conversation in the meta world with somebody else. Is it because they’re happy and excited, or because they’re angry and frustrated? And then you could maybe use sentiment analysis on what they’re saying. Yeah, no, it is fascinating. It’s definitely interesting.
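That disambiguation idea, arousal from heart rate plus valence from words, can be sketched crudely. The threshold and word lists below are invented for illustration; a real system would use trained sentiment and physiological models rather than keyword counts.

```python
# Crude sketch: elevated heart rate signals arousal; word sentiment
# suggests whether the arousal is positive or negative.
POSITIVE = {"great", "love", "awesome", "happy", "excited"}
NEGATIVE = {"terrible", "hate", "broken", "angry", "stuck"}

def text_sentiment(utterance: str) -> int:
    """Positive-minus-negative keyword count; a toy valence score."""
    words = set(utterance.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def read_emotion(resting_hr: float, current_hr: float, utterance: str) -> str:
    aroused = current_hr > resting_hr * 1.2  # arbitrary 20% threshold
    if not aroused:
        return "calm"
    return "excited" if text_sentiment(utterance) >= 0 else "frustrated"
```

Heart rate alone cannot tell joy from anger; only the fusion of the two signals does, which is exactly the multimodal point being made.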
Vangelis Lympouridis |
And we’ve seen great examples Hewlett-Packard, for example, created a headset that has a lot of embedded biosensors and the model they were going after was cognitive load. |
Vangelis Lympouridis |
So within the eye tracking, they also had a system to measure pupil dilation, beyond the saccadic movement of your eyes. And suddenly these nuances, these biometric nuances, become indicators of your cognitive load, and cognitive load is very important in assessing the interactions of the users. So within the future context of localization, what we also call cosmo-localization, it’s very important to see the omnidirectional pointers between all these modalities. The environment is one, the biometrics another, and then who are the agents? We might have autonomous agents, AI agents, human agents, robot agents. This is the promise of Industry 4.0, not very far away, in the next five to 10 years; we already have all the underlying technologies. What the metaverse promises, a singular layer where everything can be visualized, might take longer, because we’re talking about visualization, but in terms of data integration, and how the analytics and the context awareness are going to be manifested, it is already manifesting.
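As an illustration of the pupil-dilation-to-cognitive-load idea, here is a baseline-relative index. Task-evoked pupil dilation does tend to grow with mental effort, but the 25% saturation constant below is an arbitrary assumption for the sketch, not HP’s model.

```python
# Sketch: mean pupil dilation relative to a resting baseline as a crude
# cognitive-load proxy (the 0.25 saturation constant is invented).
def cognitive_load_index(baseline_mm: float, samples_mm) -> float:
    """Return a load estimate in [0, 1] from pupil-diameter samples (mm)."""
    if baseline_mm <= 0 or not samples_mm:
        raise ValueError("need a positive baseline and at least one sample")
    mean_mm = sum(samples_mm) / len(samples_mm)
    relative = (mean_mm - baseline_mm) / baseline_mm
    return max(0.0, min(1.0, relative / 0.25))  # ~25% dilation saturates

low = cognitive_load_index(3.0, [3.0, 3.0, 3.0])   # at baseline: no load
high = cognitive_load_index(3.0, [3.8, 3.9, 3.7])  # sustained dilation
```

The interesting part for localization is downstream: a load score like this could flag when a localized experience is harder to process in one locale than another.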
Andrew Thomas |
It might be scary to some people to think that your biometrics are now potentially becoming data for companies to measure your level of engagement. As you’re saying, if you’re looking at pupil dilation, are they literally zoning out and they’re not really paying attention versus no, they’re really cued in to whatever it is that we’re doing here. And they’re sitting on the edge of their seats, listening to every word. And that’s the sort of thing that if I was in a live auditorium, giving a speech, I could have a sense of that when I’m talking face to face with people and kind of reading their energy and their body language. But if I just go to a website and watch a video, there’s no way for a company to kind of measure that. And it sounds like what you’re suggesting is whether or not we would be able to report and visualize that data, we are able to gather that data in the very near future. For folks that might be a little concerned about that, do you have any thoughts on the built in privacy and security concerns and how companies are going to address that, or is there any talk of that already that you’re aware of? |
Vangelis Lympouridis |
There are many. What is important to understand is that without regulation, there is no adoption. We need regulation, we need standardization, and we need the framework for enterprises to adopt these as services. And of course we need an ethical construct, an ethical framework. So more and more, we see a return to ethics and ethics by design, and an understanding that the ethical framework is not an afterthought but an initial thesis.
Andrew Thomas |
Has to be built in from the beginning is what you’re saying. |
Vangelis Lympouridis |
Absolutely. And it comes down to the services also, because if the ethical framework and the regulatory framework define how to fuel collaboration and activity, offer more experiential learning, and drive the design of value, of future value, across intercultural experiences, then we have a solid framework on which to build the security layers and the layers of privacy. But if we don’t delimit what is possible, then we cannot define what is possible, and then we cannot frame how this should be designed and implemented.
Andrew Thomas |
I do want to circle back to a term you mentioned a little while ago, because I think it’s a relatively new term: cosmo-localization. In the way that transcreation was kind of an invented term for our industry, for thinking about doing more than just translating, but transcreating, typically, marketing materials, though obviously it could be any kind of content. Can you define for us what cosmo-localization is, how it came to be as a term, and how you think about it?
Vangelis Lympouridis |
Absolutely. It’s about thinking of a multimodal localization that meets a universality, the cosmos. The multimodal localization we touched upon is the idea that the agents and the agency of the world are in a constant, direct translation, feeding through this extensive model of meaning making. And then the localization is about cultural appropriateness: taking into account cultural content and context and doing this transcreation of reality. So it’s a term to inspire, a term to define, an open term for future input. Once it gets established, it will mean more specific things and services, but for now it’s an excellent term to drive us further and frame what this is all about.
Andrew Thomas |
So in your mind, cosmo-localization would be the appropriate service name if you will, for localizing metaversal experiences? Yes? |
Vangelis Lympouridis |
I think so. Absolutely. |
Andrew Thomas |
We love our terminology in the localization industry, so it makes sense that we would have a term coined for this unique approach. Because it does seem, and this is just from my layman’s perspective, tell me if I’m wrong, but from a localization industry point of view, this feels more evolutionary than revolutionary, in that it combines a lot of the services that we already do today, but does them all together, whereas today we might do them piecemeal, right? We already localize videos to various degrees. We obviously translate text. We do live interpretation. We have machine translation for user-generated content and lots of different use cases. We have a lot of these individual services today, but they typically don’t all get rolled up into a single project.
Andrew Thomas |
Like I said, this is why I go back to video games, because video games are the one area where a lot of these things do get rolled into a singular project. But outside of that industry, a lot of the time they’re separate, right? You’ve got some marketing department or training department creating videos, you’ve got tech docs creating lots of words, and there are different groups creating different kinds of content. They do all come together for a customer experience, but as individual touch points, whereas what you’re describing is really all of that content coming together into a single experience. Right? So am I correct in that, or did I miss something?
Vangelis Lympouridis |
Absolutely. And that, I think, is a big opportunity for the community and also for the RWS Group. What you’re offering is the expansion of capabilities, and how the back-end orchestration can enable this fusion on the fly and drive this unification that creates relevance, the importance of relevance, in order to see adoption of these services in what we were talking about.
Andrew Thomas |
Yeah. So while this is going to be, I think, revolutionary for businesses and for consumers at some point down the road, I think for us, as you say, it’s really more of an opportunity to combine a lot of things that we already do well today. |
Andrew Thomas |
So, taking a step back and thinking about the industry as a whole: where do you think translators should be focusing over the next several years? What skills should they be working on? What should language service providers be thinking about and making investments in? If you don’t mind, think about the localization industry as a whole and briefly give some advice to anybody listening: if they were going to take advantage of this changing world, where should they be making their investments? Where should they be spending their time?
Vangelis Lympouridis |
On getting clarity; on building elegant mind maps of where everything is and what it means. It’s a semantic orchestration: understanding the space where an experience, virtual or synthetic, happens, when it happens, and what it means in terms of agent-to-agent interaction and agent-to-space interaction, and how important context is. Other than that, there are technical things to be addressed. But every analysis at that level can be done by anyone who starts thinking about the physical experience. That’s the beauty of it all: these new technologies are closer to the human, to the physical human experience. So it’s easier to translate your human-to-space experience and what it means, or your interpersonal experiences within context from your first-person standpoint, filter it through your professional know-how, and voila, you have the emergence of what is important and how to get there, versus other industries that might have bigger problems in this transition. Right.
Andrew Thomas |
That makes sense. Is there anything important about cosmo-localization that we haven’t discussed yet? Any other points that you’d like to make? |
Vangelis Lympouridis |
I think the last point has to do with modularity in the polymorphic nature of cosmo-localization. Everything is reusable. The initial perception is of something so fluid that you cannot embrace it, but after a while you understand that you can turn this to your advantage by building systems that constantly recycle material. So this polymorphy, and the core values of a system that needs to be in place in order to be scalable, one based on modularity and reusability in this polymorphic nature, I think is crucial to getting this right. |
Andrew Thomas |
So I want to dive into that a bit more with an example, if you don’t mind, because obviously localization has been focused on reuse forever, since the introduction of terminology and translation memory, right? And clearly now we have a lot of reuse through trained machine translation models. There’s a lot of focus in this industry, particularly in the technology that supports the industry, on translation reuse. But when you talk about modularity and reuse, do you have some specific examples that you can give us? |
Vangelis Lympouridis |
Absolutely. Take the space itself, for example, and how the notion of the space can inform context and vice versa. Spaces can be classified, walls can be textured, and then by changing the dimensions you are in a small or a big place, you are in a place surrounded by glass or metal. These should be the primal elements of defining the space where the experience takes place. When you go from space to gesture, for example, gesture recognition and natural language processing, these are elements that require cultural adaptation, and then all this analysis and the feedback loop between meaning and meaning-making is something that can be constantly reused, reproduced and fine-tuned. |
Andrew Thomas |
Clearly metadata becomes even more important in this scenario, but I would think this also means that terminology becomes more important than a lot of the other linguistic tools at our disposal. Yeah. |
Vangelis Lympouridis |
You’re absolutely correct. |
Andrew Thomas |
Because basically what you’re describing is essentially tagging the spatial state with a term that can be dynamically changed as the space itself changes. Yeah. |
Vangelis Lympouridis |
Correct. Or the experience changes. |
Andrew Thomas |
The experience changes. Yeah. So as you say, as you go from glass to metal, the terminology describing your environment automatically changes. You’re not having to translate that from scratch, right? Because the system knows the physical state has changed, the terminology and how we describe the space changes with it. |
Vangelis Lympouridis |
Absolutely. And within that, I think a meta frontier of that layer will be to translate intent. What is the intent? And that can also give us a model for understanding intent in the physical world, because in the synthetic world you can capture it. So if you capture it and you model it, then you can translate it in the physical world as well, but in the physical world it’s very difficult to capture and monitor and- |
Andrew Thomas |
And in the physical world, you’re limited to where something is physically located, whereas in digital, you can be anywhere in the world and experience that content the same way as anybody else, which is the great advantage of digital. So this is combining the best of both worlds: you get the spatial awareness that we have as human beings living in a physical world, combined with all of the advantages of digital being anywhere, everywhere, all the time. So, yeah, it makes total sense. |
Andrew Thomas |
Thanks, everybody, for listening today. I hope you enjoyed this as much as I did. It’s a fascinating topic that I could discuss with Vangelis ad nauseam; I’m sure we could talk about this for much, much longer. But thank you very much for sharing your words of wisdom with us, and to all the listeners, thanks for listening. We’ll see you in the next episode. |