Artificial intelligence for good with Tomer Zuker

Tomer Zuker, an executive in the AI space, answers tough questions from our founder and CEO, Jordan Greaser, about the future and the ethical and philosophical ramifications of generative AI.
Listen on Apple Podcasts
Listen on Google Podcasts
Listen on Spotify

or add our feed URL to your podcast app of choice!

Show notes

Pandora’s box has been opened. Generative AI is all the buzz among salespeople, marketers, content creators, students, teachers, and many more.

Will it replace jobs? Is it considered cheating? What about copyright laws?

In this episode, Tomer Zuker, VP of Marketing at D-ID, and Jordan Greaser, RevOps therapist and CEO and founder of Greaser Consulting, sit down to tackle the philosophical and ethical implications of generative AI.

And one thing is for sure, as Zuker, an executive in the AI space, shares: as the technology advances, those who learn to work with it are one step ahead of those who try to work against it.

Jordan  00:00

Hello, this is Jordan, the owner and CEO of Greaser Consulting. On today’s episode, we have Tomer, who is a VP of Marketing for a generative AI-based company. And we’re going to be talking about the philosophical, ethical, sort of natural human instinct with all this AI talk. You’ll hear my undertones in this; we talked a little bit about Pandora’s box. And listen, the technology is here. It’s out there; we can’t put it away. So what are we going to do with it? How do we think about it? Crew, lean into this one; the world is already in the midst of rapid change. And what matters to me in this is not that it’s changing, but that these kinds of philosophical, ethical conversations are happening. I don’t have so much hope to think that if they’re happening, everything’s going to turn out okay 100% of the time, but I’m not okay if we’re just running ahead full speed without stopping and thinking and discussing the ramifications, the countermeasures, the, you know, move forward, move back in all of this. And so I’m really grateful to Tomer, who hopped into this… for someone who’s in the industry to be willing to have these conversations. Lean in; here we go.

Intro Jingle

Say you want some clarity in sales and marketing and SEP? Well, we have just the remedy: Our podcast, RevOps Therapy. Yeah.

Jordan  01:46

Hi, everyone. We have Tomer with us today. And I’ve asked him, just for everyone listening to know this… to give a little longer introduction on things like background, history, hobbies, etc., because that really helps lay the credibility and the foundation, I really believe, for this conversation. So Tomer, have at it.

Tomer  02:08

Yeah, hey Jordan. Thank you so much for inviting me. So I’m Tomer Zuker. I’m the VP of Marketing of a really cool startup named D-ID, D-dash-ID. It’s a startup in the generative AI industry, so it’s, you know, the hottest thing in the industry right now. The company was founded in 2017, to be honest, as a cybersecurity company. The founders developed a very unique platform that can mask images and photos to prevent them from being stolen by AI. And a few years later, they understood that this kind of journey was going to be quite complicated, so they did a pivot and used the same technology to do what D-ID does today. And what we are doing is offering a platform that can take one still photo of a human face and create an avatar, or digital, you know, person, that can speak in almost 120 languages. It’s super, super cool. And we are, as a startup, offering a platform to businesses, mainly for marketing departments, that can create how-to videos or product marketing videos, or any kind of communications with clients, using digital people; also for learning and development use cases, and also for customer experience. One of the things that we have developed recently is what we call conversational AI, or an AI agent, or an AI assistant. Think about a call or a conversation with ChatGPT, or any kind of, you know, AI-based chat, but a smart one with a face and, I would say, a personality. So this is what we do at D-ID. It’s fascinating. And as a content creator myself, I have a podcast as well and also run a huge community on LinkedIn and write a lot of articles and blog posts; I find that D-ID as a platform can enrich my own content. So I use that as well, you know, to give more, I would say, like seasoning on top of the content we create, and we work with a lot of content creators across the globe.

Jordan  04:55

So obviously, for everybody listening, we’ve got somebody who’s deeply entrenched in this industry. And, just for everybody listening, I was upfront with Tomer ahead of time and said, “Hey, we’re going to ask some hard questions today, because this is really sort of the ethics, philosophical side of why we’re doing what we’re doing, how we’re doing it, the implications, and whatever else.” And I think by this point, everybody knows AI is bringing a seismic shift; you’re not going to stop it; it’s here; there are going to be changes. And so this is just thinking through, you know, what these implications are, what we’re okay with, what we’re not okay with. And, you know, right out of the gate, you talked about how, as a content creator yourself, let alone, you know, a VP of Marketing in the business, it’s helping you season and make the content richer. So you’re not saying it’s taking away; you’re saying it’s making it better. Could you just talk a little bit about, from your perspective, why this is helping you as a content creator, as opposed to potentially taking away the value that you bring?

Tomer  06:10

Yeah, I think, you know, let’s zoom out and talk about the entire industry and the entire set of generative AI tools, not specifically about AI video, like what D-ID does. You know, as a content creator myself, I see it as a way to augment my capabilities. So think about using GPT for ideation or for research, and not only GPT; it can be Bard, it can be Claude; it can be any kind of AI-based chat. So it can speed up processes. And also, once you create a dialogue with those tools, you can come up with very creative ideas. You know, people say that AI is not really intelligent, that it’s not really creative. But to be honest, these tools can bring a lot of creative ideas; they can even converge ideas or items from one domain with another domain and bring something completely new to the table. So I find myself, you know, using that in order to find new angles or new perspectives on topics. And I really love to write, so the essence of my writing is very personal, you know, coming from my essence and my personality. But those kinds of tools, deck generators, or image generators, or AI video generators, add more layers, or more dimensions, to the content I create.

Jordan  07:52

We’ve already had a massive explosion of content, even in the last 10 years, certainly the last five. I mean, we as a society, a global society, are just massive consumers of content. With that in mind, you know, there’s a huge market to consume. There’s also a big market to create. But as AI is getting involved in all of this, to your point, there’s so much exponential creation of content going out there. And as much as you have that sort of burden inside of you, that it’s an expression of yourself to create, does it matter if you create, and now you’re so crowded out that there’s nobody to consume what you’ve created?

Tomer  08:39

Yeah, it’s a really good point, because, you know, it looks like the barriers to writing have been lowered. It’s easier to write; it’s easier now to create content. So potentially everybody can be a content creator. So there is an explosion, as you mentioned. But in order to create really great content, to excel in that, you need to know how to write. And at least for now, I don’t see any really good AI tool that can replace that. What I do see is a lot of mediocre content out there. So we don’t need more of that; we need really good content. So you need to have the technique. You need to know how to express yourself; you need to really understand your readers, you know, the target audience. And I think, in that sense, humans still have an edge. This is, like, our competitive edge as human beings: to really understand and relate to other people’s feelings. Because when you create a story or an article, you’re taking your reader on a journey of imagination. This is really storytelling. And in order to craft really good storytelling, you need to know how to relate to other people. So I’m not afraid of that. Yes, there is, and there will be, more content in the world. But, you know, the great content will be out there as well. And people will consume that type of content.

Jordan  10:32

Well, that’s coming from your VP of Marketing background, right, of, “Hey, I know how to attract an audience”? But I suppose you can make the argument of, “Well, anybody who’s creating content has to know how to attract an audience, so the burden is on them to begin with.” Is that right?

Tomer  10:50

Not necessarily. So, you know, I can see a lot of content out there, you know, coming from businesses or organizations and even content creators; they have something to say, okay, they do that. But it’s not really compelling. It’s not really interesting; it’s not really shifting your mind to another direction or another stage. So this is what I’m saying. You can use GenAI tools in order to create great content, but you need to know how to work with them, you know, how to orchestrate them, in order to create really, really, really good content. I will say really, again.

Jordan  11:28

So we’re, I mean, we’re talking about sort of the creative side of this and your own personal agency in the writing, and whether, you know, generative AI is causing issues with that. But even in a very pragmatic sense, let’s zoom way out of this conversation. I mean, there are some real estimates out there today that say that, within 20 years, 50% of all jobs, just blanket, all jobs, and, you know, right number, wrong number, whatever, will be completely erased thanks to AI. So, for example, one of the things you’re talking about is a video assistant, and I’m not talking about your company here; it could be similar, but I have a friend who’s doing something that sounds somewhat similar: you can walk into Home Depot, or you can walk into Dick’s Sporting Goods, and there can be an AI assistant standing there that you can have a full conversation with, which removes the need for, you know, a customer associate. You can go into a hotel, and instead of a front desk worker, there’s an AI assistant that can handle all that natural conversation. I mean, you can see how very sort of entry-level positions, and I’m not saying you don’t have to be good to do them, but entry-level positions are going to be wiped out. Even mid-level positions are going to be wiped out. And so it’s the age-old debate of, like, do you stop innovation for the sake of the working class? Or do you tell the working class, “You need to retool”?

Tomer  13:05

Yeah, first of all, there is a concern. So it’s something that, you know, I acknowledge, and I think it would be wise for all of us to acknowledge, that this is, like, a very, I would say, even historical shift in the industry. I think a good analogy for that is the Industrial Revolution back in the late 19th century. Think about these kinds of really old-school professions that existed, like, 200 years ago, before the Industrial Revolution. Some of those occupations, you know, are not with us anymore, right? You don’t see a lot of horses on the streets, right? And it’s natural. And, you know, alongside that, we have seen, and we can see even right now, that the majority of roles have been adjusted to the new reality. They create new muscles. And yes, there will be some, you know, new roles that will pop up based on the new technology; it’s something that’s really common, really natural. Think about, you know, a few decades ago: you created slides by, you know, really drawing them for a projector, whatever; you didn’t have slides or PowerPoint. So, you know, I think once there is a concrete need, and once the technology can bring real value, people will consume it, and it’s really hard to stop the advancement of technology. Because if one territory, one, you know, area or region in the world, tries to control this kind of advancement, it is going to lose its competitive edge from a national perspective. And in the beginning of the explosion of ChatGPT, we saw a few examples of that; you know, even in New York, I think in January, they tried to stop the use of ChatGPT in schools. And it lasted till May, because I assume they understood that you cannot stop the progress. Because people find a way; people are very creative; they will find a way to consume the technology if it really brings them real value. So, yes, some occupations will be at risk. And my recommendation for, you know, those people is that, first of all, be aware of that; try to experiment with the new tools; understand their limitations, because they do have a lot of limitations, and we need to be aware of that as well. And try to evolve yourself and adjust yourself to the new norm. I’ll give you an example. We mentioned, like, marketing agencies, you know, advertising agencies and production agencies, right? So the smartest agencies are already using generative AI tools in order to create content at scale, with lower, you know, cost; they increase their profits, and they can handle more customers, more clients, with the same capacity. So there is real value for them in using GenAI tools. And again, I see these kinds of shifts, and it’s happening right now. So it’s very interesting to see that in real life.

Jordan  16:48

So I have, I have two… everybody’s opinions are always shaped by their personal experiences in some way, right? So I have two things that come to my mind. One I’m more concerned about than the other, and who knows if I’ll even bring that out. But so the one… okay, I kind of laugh about this: I stepped out for a while to finish a PhD. And you know, you write this dissertation; it’s this labor of love, sometimes, actually most of the time, of hate… writing this thing, and you’re just, like, laboring over every word and all this. And you go through all these hoops, all these justifications, whatever; you finish years of work. And then, you know, for the fun of it, I could go to ChatGPT and say, “Hey, write me a dissertation of this many words, with this frame, in this tone or whatever.” And just for everybody listening, that is not what I turned in. This is after my committee was done. I was approved. I did all the work myself. In no regard did I use ChatGPT. But my point is, oh my gosh, listen, it was like 60% of the way there. Yeah, it needed massaging; it could have used work and all kinds of things. But the point is, 60% of years of work and effort, in about 30 seconds, it spits out for me. And then if I massaged it… I didn’t, because I thought, I’m not even gonna go down this road, because I might puke over by the side somewhere. But the point is, the reality is, me, Dr. Greaser, PhD, whatever, I have to adjust. Yeah, that’s just reality, right? Now, I don’t think those pursuits were for nothing, to your point. I think there’s a lot that I’ve learned that lets me guide things and work through things, whatever. But education is… what do we do about this, right? Research? What do we do about that? I’m not coming from the standpoint of “Well, hey, I had to go through the hard knocks, so does everybody else.” But I am simply recognizing here, like, geez, my educational journey would have been, I think, pretty different if ChatGPT was a part of it. That’s not wrong. It just means it’s different.

Tomer  19:05

That’s true. Think about how students or, you know, pupils did research 30 years ago, right? I remember myself as a kid, I went to a library; there’s a place, it’s called a library, you know, there was something like that. And once, you know, Wikipedia or any kind of internet-based directory or whatever came into the world, there was, like, a huge objection from teachers, from professors: “Hey, now you don’t have to work really, really hard to get information. You don’t need to memorize that,” whatever. But in the end, they understood they needed to adjust, because our skills as human beings also evolved. You don’t have to remember now what you had to, like, 100 years ago. It’s not a question of how to retrieve the information; it’s how to use it, or how to retrieve it faster than others. So if GenAI tools can bring you value, I think the shift will be in how to use those tools to take us to a different dimension. Yesterday, by the way, I attended a panel around AI in education. I was super impressed that the head of, you know, a faculty for entrepreneurship, and also people from high schools in the country, have started to embrace AI as part of the syllabus of their courses, the training; so not all of them, but some of them have started to introduce their students to GenAI tools. And I believe this is the right direction. In the end, it goes back to what you just mentioned. I think there is a gap, a growing gap, between the evolution of technology and the evolution of our biology. And we need to close this gap, because nobody can stop the advancement of technology. So we’ll have to find new ways to consume those, I think, really powerful and beneficial tools. So yes, for your PhD, for instance, you had to have this kind of body of knowledge or understanding in order to use GenAI tools for the best, you know; otherwise, you know, garbage in, garbage out. If you don’t know how to ask the right questions, the output will be really, really… a low-quality output. So I think it’s another set of tools that we will adjust to and use.

Jordan  21:59

So this hits the concept of credibility for me, and I don’t mean credibility in the person who created it, but I’m saying the credibility of the work itself. And you even mentioned there is an onslaught of just really poor-quality content. And now we’ve created a mechanism to basically enable a bunch of folks, and this is gonna sound elitist to say it, but hear me out, okay? A bunch of folks with really poor-quality content; it could even be factually incorrect. Okay. It’s misinformation, it’s fake news, it’s whatever term you want to use, and now we can just drown out other, maybe higher-quality, information, right? Because we can mass-produce things at scale now. And this is the thing that kind of scares me a little bit: videos can be created now that are realistic, okay? To your point, we can take an image of one person’s face and potentially create an entire video out of it. You know, back in the day, it was, “Well, do you have video evidence?” Well, now, is the video evidence credible? And to your point, there’s always going to be, you said this earlier, like, well, you know, the humans are beating the machines here. In every area of weakness that I’m talking about, the machine is going to figure out how to make a video; they’re going to beat some system, and then some system is going to come back to better recognize a fake, and then they’re going to beat the system. But my point is, we can do this on a much larger scale. That’s true. And so now, what’s true, what’s false? And so, like, to me, this is like we’ve opened Pandora’s box. You know, when that box is open, you can’t put it back in. That’s real. Like, the big question is, we can debate all day long, “Should you have opened Pandora’s box?” The reality is, it’s open; you can’t close it. Okay. But I have concerns as a father, like, thinking about my daughters: one image of my daughter, and now who knows what can happen with this, right? I have concerns about, you know, just basic facts and information even. I’m not trying to sound like a conspiracy theorist, but, like, government information, news outlet information, school information; you know, we can take one little piece, and now we can extrapolate it into this whole narrative. And there can be an entire content machine behind it. The tool can be used for good, and it can be used for evil, but I’m simply saying the box is open, and my gosh, what have we released upon humanity?

Tomer  24:47

Yeah. Again, these are real concerns, and I’m also concerned; I’m also a father to two daughters, and yeah, so that’s very… think about the way you can create, you know, images and voices. It’s something that we could do in the past, but it was super complex; it was very, you know, expensive; and you had to have specific expertise to do that. Now it’s available to the masses. So, yes, this is a risk. And first of all, we need to acknowledge that. I think the solution for that should be a mix of a few layers, I would say. First of all, and this is maybe something that is still, like, open-ended, something that is not common enough or standardized enough in the world, is to think about how to regulate the market, how to control what is good and what is bad. Because I think, in the end, once a video that has been produced using AI tools is recognized or, you know, marked as an AI-based video, like a synthetic video, that’s okay. I think the concern is that we cannot tell what is right and what is wrong, what is fake and what is genuine. So, for instance, at the company I work with, D-ID, for any video people generate using our platform, there is a watermark of the logo of the company, D-ID, and if you are a paying customer, there is a watermark of AI. And if you are, like, a big enterprise using our platform, you can remove the logo, but you sign a contract that you have the rights to use, you know, the photos, and from time to time we shut down accounts if they’re not following our, you know, terms and conditions. In addition to that, we have this kind of moderation mechanism that blocks people from using famous people’s images. So we are trying, at least from our own responsibility, to diminish the risk. I think the key word here is trust. Because, as you mentioned, what is reality, right? Nobody wants to live in a fake world; you need to build this kind of trust between you and your surroundings. I think this is the real challenge. And I think it will be a mixture of regulation and laws, and also the responsibility of the users themselves, and also the vendors, you know, the companies that provide the platform. And all of those, you know, processes or means are subject, again, to the use of the technology, because, again, you cannot really block or limit the advancement; you can only, you know, educate or increase awareness of the risks and try to control it as much as you can.

Jordan  27:59

It’s like, you know, the elderly, for example, who would get preyed upon, or still do rather: something pops up on the computer, and it says, you know, Microsoft Support is trying to call you; you call some number, and the next thing you know, you’re inputting your card and whatever. And so, you know, for somebody who grew up in the tech world, you know, whatever, you see that and you’re like, “Come on, that’s ridiculous,” but it’s still getting some folks. That’s true. The point, though, is, I think there’s going to need to be a ton of education on what is and isn’t out there. I mean, there are phishing attempts today, right? A salesperson gets an email from somebody that says, “Hey, here’s the signed contract,” and the salesperson goes, “I don’t remember talking to you, but I’ll take a signed contract,” right? You know, things get more and more nefarious as they go. But, you know, younger and younger in age, like, kids, teenagers, whatever, need to be educated, definitely, on what they see. And that’s… this, again, being the father of children, whatever. I know there are big implications for jobs; there are nefarious things that can be done with video and all kinds of things. But, you know, a concern that I have is, like, children losing their innocence sooner and sooner, simply because we have to teach them to be more vigilant and aware of the world around them because of nefarious things that could be coming. And maybe I sound like a doom and gloom person and whatever else, but, like, I mean, all of a sudden, eight-year-old kids are having access to phones and, you know, this whole world, which is a lot more complicated, a lot more deceptive in a lot of ways. And I know tools can be used for good, and I’m not here to say this is all evil and whatever. But wow, like, there’s actually, to me, one thing to be ultimately looking into: the loss of innocence of our children in this, not just the retooling of the workforce.

Tomer  30:07

I think there is truth in what you’re saying. It’s a different childhood, I would say; it’s like a digital childhood augmented by AI; it’s different from the childhood I had, you know, many years ago. But, you know, children are really smart and very curious. And, you know, a few months ago, we had an AI summer camp, in which we, you know, a few experts in the market, and I was, you know, part of this kind of group, educated around 20,000 kids and teenagers who were, like, starving to learn about the usage of GenAI tools, mainly, by the way, you know… these kinds of, you know, music and sound generators, because they’re really cool. So you can see, you can sense, the curiosity, the imagination they use in order to get the most out of those tools. But at the same time, yes, we need to invest more in awareness; we need to invest more in, for instance, critical thinking: don’t trust what you see, you know; cross-check, validate that this is actually true. Yes, it’s really different from the time when we, you know, watched TV, and what we saw on TV was the 100% truth of the world. It’s never been like that, okay. But now it’s more complicated, because, again, everybody can create content, and it will look like very genuine content. So you need to develop these kinds of senses, or, again, new forms of critical thinking, in order to validate whether the content you consume and see is genuine or not.

Jordan  31:57

Oh, Tomer, we are at time for today. I feel like I could lean into this for an extra two or three hours. Yeah, first off, I just want to say I appreciate you coming on, coming from the AI space and tackling this. To me, I think it’s really important, and really refreshing too, coming from folks in charge of the development and the progression of this technology, which, to your point, is not going away. I want to know that folks are grappling with these ethical debates. I want to know that there are philosophical underpinnings being considered as we move forward. And how do we manage it? How do we equip for this? It’s a reality that we’re heading in this direction; I just don’t want to go there blindly and then wake up one day and say, “Oh, I didn’t realize this would be a problem.” And I don’t think anyone’s that naive. But at the same time, it’s just important, for me at least, to know and recognize that, hey, there are at least folks thinking about this as it goes forward. So thanks for hopping on and chatting about this.

Tomer  33:11

Sure. If I have one minute, even, to leave you or, you know, our listeners with another thought… we call it AI for good, that is, artificial intelligence for good. So what we are trying to do is leverage AI tools in that sense, the D-ID platform, in order to do good or give back to society. So, for instance, we have people who have, you know, a disability that keeps them from communicating with others, who can’t speak, or autistic people, or people who are paralyzed, and they create their own avatars, and this is the way they interact with the world. And I have many, many really incredible stories of the use of avatars to create a new form of communication with the world. So, in the end, I think there are some good sides; there are some dark sides. The truth is somewhere in the middle. And I think all of us, as a society, need to see the good and the bad, and to gain the tools and the understanding of how to manage the new reality that we are facing now.

Jordan  34:19

I think, Tomer, what you’ve just said, in a nutshell, is “Buckle up, everyone. Get ready; like, we’re on the rise. So buckle up.” Yeah, Tomer, thanks for coming on. And for our listeners today, thanks for listening in. Thank you.

34:42

Hot dog. That was a great episode. Thanks for listening. If you want to learn more about Greaser Consulting or any information you heard on today’s episode, visit us online at www.greaserconsulting.com. Be sure to click the Follow button and the bell icon to be notified of the latest here at RevOps Therapy. Thanks and see you real soon.
