
Block Tides' Binance Live with Access Protocol

Updated: Feb 17

Binance Live with Access Protocol

Making Waves on Binance Live – 4.4K Viewers Strong!

When AI meets Web3, groundbreaking conversations happen. In a high-impact Binance Live session, Myrtle Anne Ramos, CEO of Block Tides, and Andreas Nicolos, representing Access Protocol, took center stage to discuss the future of AI, blockchain, and decentralized content monetization. The session was a massive success, ranking in the Top 67 on the weekly leaderboard and earning a coveted spot on Binance Live's homepage.


Key Takeaways from the Session:

AI & Web3 Synergy - How artificial intelligence is reshaping blockchain, creating new opportunities for data access and monetization.

Decentralized Content Economy - The power of token-gated access and staking-powered subscriptions that redefine content ownership.

No-Code Data Marketplaces - Simplifying data access through user-friendly, no-code platforms that drive innovation without technical barriers.

On-Chain Monetization - The next evolution of digital content revenue models and how Access Protocol is leading the charge.


Meet the Visionaries Behind the Discussion

Myrtle Anne Ramos is the Founder & CEO of Block Tides, a powerhouse in Web3 marketing and PR. She's a KOL at CoinMarketCap and a Forbes Business Council Member, leading some of the biggest initiatives in blockchain adoption.

Andreas Nicolos of Access Protocol is pioneering on-chain monetization and transforming the way creators and publishers generate revenue.


Missed It? Watch the Replay!

For those who couldn’t tune in live, you can catch the full replay here.


Full Transcript of the Binance Live Session

Myrtle Anne: Hello, everyone. Can you? Can you hear me? Please press one. Let me know. If you can hear me, please press one. Let me just go ahead and check. So, as you can notice, I have a cute background tonight. What? What should we name the cat behind me? So please suggest, guys. Are we live on Facebook as well? Guys, guys, Marina, Marina. Hello!

Andreas: Hi!

Myrtle Anne: Alright, so we're back again. So happy February, guys. Even though I don't basically, really celebrate Valentine's for some reason, so happy Valentine's to everyone, but I suggest do not celebrate anyway. So I have Andreas here again, and we'll be talking more about, of course, the AI revolution that's happening, and how AI is transforming the crypto industry. So of course, how are you first, Andreas?

Andreas: Doing great. Can't complain. Another new year. It's going well.

Myrtle Anne: Alright. And of course, another new year, guys. And of course, there's a lot of massive potential, of course, when it comes to events. So those who are basically going to Consensus, let us know. So drop us a comment below on most of our socials. And of course, there have been great, great events happening already. It's going back to the real builders instead of just GameFi, guys. So yeah, so let's get actually straight to the point. So the number one question that I have, Andreas: we've already talked about the plans last live, right? But basically, I have in-depth questions when it comes to AI-related stuff. Because, of course, this intrigues a lot of people, and many people are not well versed. They just know ChatGPT, right? So the first question: how is AI basically transforming the crypto industry? And of course, why is data accessibility so crucial, especially for Access Protocol?

Andreas: Yeah. So for the industry by and large, you can think of AI as basically just lowering the barriers to entry for the average user. Doing complex tasks on chain requires a level of sophistication for people to actually engage in. You can think of, like, high frequency traders, etc., etc., versus what maybe the average user is doing, just interacting through a Phantom wallet. So agents will, and I will say, a lot of this will be long term, not speculating, but believing in the long term impacts of this. We're not quite there yet, but agents will give the average user in crypto the sophistication of, you know, highly sophisticated, say, high frequency trading shops or quant funds, etc. So really just kind of, I would say, leveling the playing field for the average user.

Myrtle Anne: So when it comes to this, of course, data is as big as it could be, right? So data is the best ally, especially nowadays, in basically running a business, right? So why is it very important for us to transform AI right now, despite the GPTs, the models that are actually popping up here and there, when it comes to DALL·E and then ChatGPT, DeepSeek. So I have an article to be posted later, by the way, guys, in Access Protocol, so make sure to subscribe, by the way, when it comes to the exclusive content and, of course, insights. So the question again, yeah.

Andreas: So for data, it's super important. We've kind of gone from this idea of having a singular, generalized agent that is going to do all of our tasks to more of the idea of specialized, verticalized agents. So instead of having one agent do all of your tasks, you're going to have 10 agents doing one task each, but they'll be very, very good and specialized at those tasks. And it makes sense, right? If you have, you know, funds involved, you want to make sure that that agent is really focused on one specific job and action, and interacting with all the other agents that are doing other functions that are important for you. And when it comes to having those specialized agents, the first thing that you need is data. So if you want to, say, have an agent that's a rug checker, for example, taking a look at all the projects that are interesting to you and evaluating them based on, you know, parameters that you set, you're going to want to make sure that that agent has access to data sets that are unique to the problem that you're trying to solve. And by kind of verticalizing it and keeping it specialized to that one specific topic, you know that you're going to have a higher fidelity output from that agent. You're going to have fewer hallucinations and just more control over that one task. So with that said, we obviously have ChatGPT and a bunch of other platforms that have a bunch of highly generalized information, but try to think of a very specific, nuanced topic and get real time information from them. And when I'm saying real time, I'm saying something that happened five seconds ago. You can't really do that. So having contextualized and actionable data that's accessible to your agent is going to be incredibly important for the average user to actually be able to manage and interact with an intelligent agent.
So data, the importance of it cannot be overstated. The better data and the higher fidelity data you have, the more successful an agent you're going to have, and the more productive you're going to be.
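Editor's note: Andreas's model here, many narrow agents each bound to its own curated data sources instead of one generalist, can be sketched as a simple dispatch pattern. Every name below (SpecializedAgent, AgentRouter, the task and source labels) is illustrative, not an Access Protocol API:

```python
# Illustrative sketch: one narrow agent per task, each bound to its own
# curated data sources. All names here are hypothetical.

class SpecializedAgent:
    def __init__(self, task, data_sources):
        self.task = task                  # the single job this agent performs
        self.data_sources = data_sources  # curated feeds relevant to that job

    def handles(self, request):
        return request == self.task


class AgentRouter:
    """Routes each request to the one agent specialized for it."""

    def __init__(self, agents):
        self.agents = agents

    def route(self, request):
        for agent in self.agents:
            if agent.handles(request):
                return agent
        raise LookupError(f"no specialized agent for task: {request}")


# A "rug checker" agent (Andreas's example) sees only rug-check data;
# a price watcher sees only price data.
rug_checker = SpecializedAgent("rug_check", ["token_holders", "contract_audits"])
price_watcher = SpecializedAgent("price_alert", ["spot_prices"])
router = AgentRouter([rug_checker, price_watcher])
```

The point of the sketch is the separation: each agent's inputs are scoped to its one task, which is what keeps its output fidelity high.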

Myrtle Anne: Actually, yes. So I actually scolded my team, my staff behind the creatives, because, you know, they're generating the same AI generated caption for all of the socials. So I told them, guys, AI might help when it comes to working smart, but of course, it depends on the information that you're actually putting in, especially when it comes to captions, right? Because if you're not going to put in the right data, it will actually generate misinformation. I even tested ChatGPT and DeepSeek at the same time, because it's so fun. You know, it's just like a pot of clay. You're molding it into a great ally. So it's dual for me. So basically, what are you going to build? Is it something that will actually destroy you, or basically something that's going to help you equip all of the things that you'll be needing? Just like in mobile games, right? Because if you don't have enough skills, that's the reason why you always lose. So it's just like AI for me. So MOBA, I compared it actually in one of my captions for a post on Facebook. So I told them that it's basically just like a MOBA game. If you're not going to incorporate a lot of information and data, it will be useless. So thank you so much, by the way, Andreas, that's the best analogy for us guys to basically determine how AI is transforming the crypto industry. So when it comes to curators, of course, for us inside Access Protocol as creators, it's very best. This is an additional question beyond what we're talking about: how important is curation, Andreas, in all aspects?

Andreas: Yeah, I mean, curation is super important. Knowing and kind of having specific areas and topics that you're going to cover and talk about, that you know are relevant to your audience, is just massively valuable. The insights that you can get from what your audience wants, what they want to know, what they enjoy reading, and how much time they're spending consuming it are massive in terms of what you're going to plan for your next month or so. So by not curating content, not thinking about what the end user wants, and not having insights into that, you're really missing opportunities every time that you do publish work or content. You can think of every time you publish content as a kind of at bat to make an impression on your audience and keep them engaged. So you should really be trying to maximize that every time that you are publishing and creating content, and whatever tools and abilities you can leverage to do that, you should be using.

Myrtle Anne: Actually, there's a lot of people already asking me, how do you actually program your AI, Myrtle? Because there's a lot of curation already being generated that's making my life so much easier, right? So all of us are basically doing that, making our lives easier through AI. But at the same time, it's a double-edged sword; it can actually destroy something, because, you know, we've been notified by clients that there's a lot of misinformation being generated in the GPT captions that we're generating. It's just like, guys, please don't sacrifice quality when it comes to generating and using AI, because, of course, there's a lot of things happening when it comes to this, right? So curation, data, and of course avoiding flawed predictions are very crucial, especially for what you're building in Access Protocol.

Andreas: 100%.

Myrtle Anne: So there's a lot, right? Even in media, there's a lot of media already onboarded in Access Protocol. So basically, it's very important for us to really curate and then, of course, have the data handled properly. The machine is already there that's going to help you, guys, but make sure you input the right data, right? So the second question that I have for you: of course, you're currently doing the AI agents and all. So what are the challenges that you actually have when it comes to sourcing reliable data? And of course, how can Access help?

Andreas: Yeah, so there are really several different types of deployers in the context of AI. And when we're thinking about an AI agent versus ChatGPT, you kind of covered it before: ChatGPT is a more conversational, generalized model, and it might require kind of input and feedback from you, whereas an agent is basically taking in data, evaluating its environment, and making autonomous decisions based on its inputs. So under the AI agent category, you really have two types of deployers right now. There's the casual user, who faces a significant amount of challenges because they aren't as sophisticated as the other category, which is the sophisticated deployer. The sophisticated deployer right now is probably, you know, obviously coding, but also evaluating, say, 10 to 20 different data sources and creating custom pipelining to help the agent inject that information, and being able to understand and evaluate how that agent can come to an output based on the information that it's receiving. So you can imagine just the average person, even myself, having to try to juggle 20 different data inputs into an agent and then wrangle it to do the actions that you want. So really, what we want to do at Access is, we have a bunch of data partners and a bunch of media partners, and what we're building is the ability for anybody in the audience, yourself as well, to be able to select the data sources that they want. We prepare all of that data on the back end and input it into the agents. And you don't have to do any code, and within five minutes, you have a highly intelligent agent. So we're basically curating data, enabling data providers and media providers to give actionable and contextual data to agent deployers, and monetizing agents who, in my opinion, are going to be the number one consumers of content and information in the next five years.
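Editor's note: the no-code flow Andreas describes, where a deployer only picks data sources and the platform prepares and wires them in, might look like this from the platform's side. This is a hypothetical sketch; the catalog, function names, and config shape are not the actual Access implementation:

```python
# Hypothetical sketch of a "no-code" agent build: the deployer supplies only
# a config; the platform resolves and prepares each selected data source.

CATALOG = {
    # Each entry stands in for a data partner's prepared pipeline.
    "market_prices": lambda: {"feed": "prices", "prepared": True},
    "news_wire":     lambda: {"feed": "news", "prepared": True},
}

def build_agent(config):
    """Assemble an agent from a deployer's source selection alone."""
    unknown = [s for s in config["sources"] if s not in CATALOG]
    if unknown:
        raise ValueError(f"unsupported sources: {unknown}")
    # The platform, not the deployer, does the data preparation.
    pipelines = {name: CATALOG[name]() for name in config["sources"]}
    return {"name": config["name"], "pipelines": pipelines}

# The deployer's entire "job" is this one config dict, no pipelining code.
agent = build_agent({"name": "my-agent", "sources": ["market_prices", "news_wire"]})
```

The contrast with the sophisticated-deployer path is that the 10 to 20 source pipelines live behind the catalog, not in the deployer's hands.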

Myrtle Anne: I see, because, you know, there's a lot of token gating that's happening when it comes to premium content, right? So I know that you're basically solving this, because when it comes to data, guys, just so we know, there's centralized and then there's decentralized data marketplaces, just so you're aware. So of course, when it comes to Access Protocol, Andreas is really highlighting the benefit of handling it properly behind the scenes for us, right? So just imagine when it comes to the data, when it comes to the subscription staking. I love actually how you made the leaderboard part private, by the way. So I actually introduced this to the whole team, how Access works, by the way. Like, guys, you can actually see that most of the leaderboards are basically here, but it's all private. So how you actually prioritize the privacy of someone inside Access Protocol in a gamified way really astonished me two weeks ago. It's just like, oh wow, because I tried, I checked all of the leaderboards, and there's a lot of subscriptions, guys. There's a lot of people subscribing and staking forever to the creators inside Access. So I saw how seamless it is for you to actually follow and then, of course, stake, and then, of course, share your referral code for you to basically be inside the economy of Access Protocol. So good work on that, by the way. I explored it along with my team.

Andreas: Thank you.

Myrtle Anne: There's a lot. And of course, when it comes to this, when it comes to the high quality data sets. So I know that this is not part of the question, but can you please share the advantages of being part of Access, by the way?

Andreas: Sorry, I think you just cut off towards the end being part of what?

Myrtle Anne: Being part of Access Protocol. So when it comes to the advantages, when it comes to being a creator, a curator, a company. So what's the advantage?

Andreas: Yeah, so one, naturally, we're basically letting anyone monetize users with staking, so a way to kind of build a long term on chain relationship with your audience. And now what we're doing is also giving creators the ability to monetize agents. And as I said earlier, it's my view that agents are going to outnumber users in terms of their existence in the world. And I kind of said earlier as well...

Myrtle Anne: I saw someone comment on my article. Someone was just like, oh wow, your content is like this, like that. So I saw one comment on my article, by the way.

Andreas: Oh, amazing. That's great. That's fantastic to hear. And yeah, basically, if every person has five different agents that are doing, you know, specific actions for them, then that's already five times more content consumers for creators to be able to monetize. So we basically want to give creators the ability to monetize agents, feed their information and content that's relevant into agents, and enable them to then do actions, or kind of be the bridge, to be able to tell the end user: all right, these are the things that you should know today, these are the things that are relevant to what you're doing on a daily basis, saving users time, money, and just overall bandwidth. So yeah, right now, our whole goal is building out those pipelines and tooling for anyone on Access right now to, one, have their own agent that represents them, and then consume all the information that they are already subscribing to and give them the quick hits and all of the relevant pieces. So you can almost think of it as a hyper personalized newsletter every day, in a way, if that's something that rings relevant to you.

Myrtle Anne: So guys, by the way, the best explanation here: if you're familiar with automation, way back, the boom of AI automation is already happening, by the way, and there's a lot of APIs already communicating with each other. You just didn't know about it. And basically, I tried generating my own GPT in ChatGPT, by the way, but it's not monetized. That's what made me so sad. It's just like, wow, I built this GPT, and then I have nothing. So basically, that's the point there. If you've tried making your own GPT, guys, in Access Protocol your agent will be monetized, as per Andreas, by the way. So imagine the things that are going to be done by your agent. How intellectual and clever your agent is, you might never know. You'll be paid by a lot of people, right? So it's just like a subscription. Also, subscription is the best when it comes to business models. Why is it a good idea? To be honest, because it's just perfectly aligned with what I'm doing right now. So yeah, the next question that I have: of course, there's a lot of models already, right? You mentioned blockchain oracle networks, crowdsourced and synthetic data models and all. So why is Access Protocol superior? I don't want to call it superior. How is it actually creatively unique compared to the deployers out there, by the way?

Andreas: Yeah, 100%. So I think the key thing that we've been kind of screaming about is data. If you look at all of the partners that we have on platform, and I can't announce anyone that is going to be our first launch partner for this new product, but I will tell you that it is some of the best in the industry. And basically, yeah, it comes down to data, and having the accessibility to pipe that data into your agent is basically what's going to define the fidelity of that agent's output. So you can imagine the best data sources in the industry being readily available for anyone to pipe into their agent and interact with and have positive outcomes. At the end of the day, the differentiator is always going to be data, and the ability to deploy it in a no code environment is massive in terms of accelerating the use cases of agents for the mass market. So right now, I don't know if anyone on the platform has tried to deploy an agent on existing agent platforms and launchpads, but I would be very shocked if anyone is able to actually launch an intelligent agent that is specialized. So I would challenge anyone to do that without using any code, and I would say that it's incredibly difficult. So we, at the end of the day, want to give users the ability to deploy an agent with no code, with high fidelity data sources and industry leading intelligence, and do that in a matter of minutes, not a matter of months or years. So at the end of the day, no code, no barriers for agents is really the most important thing that we can do.

Myrtle Anne: Because we understand as well that not everyone has the ability to actually study coding, right? I know, me, I've actually known how to code since grade school, back with the white computers: Java, JavaScript, then C++, and then HTML. Actually, my daughter asked me, how are you actually coding stuff through HTML? So she asked me about it. Anyway, yes, ever since grade school we were already being taught how to code here in the Philippines, in my little town in Malabon City, guys. So we have a solid background already. But how about those who don't actually know how to code? So you're building great stuff out there so that you don't need to code; Access can take care of it, and, of course...

Andreas: Exactly.

Myrtle Anne: To not be taken advantage of, just basically study the basics of coding, guys. So yes, Access Protocol will take care of everything for you, coding-wise, no code. But of course, know the basics of coding, because there's a lot of bad people out there taking advantage of someone. So it's very best for you to put yourself out there and study. Knowledge is power, guys. So the next question that I have. Of course, I'm really interested in this kind of tech stuff behind the scenes; that's why I'm asking this in the live session. So the third one that I have: Access is actually allowing data when it comes to media providers, right? Block Tides is a media company, actually. I want to share something discreetly later, because I'm not yet allowed to do that. I have massive data that can help the startup industry, by the way, in having access to funding. And it's massive data.

Andreas: Amazing!

Myrtle Anne: I want to take care of it at the same time. How secure is it, by the way? Before I proceed to basically asking the third question: how safe is it for the creators, the media, the companies, to deploy their data inside Access Protocol?

Andreas: Yeah, that's a super great question, and that's one of the things that we originally thought of when we were doing this, because data is valuable, and it is really important for anyone that is monetizing their data on our platform to know that that data isn't going to be misused. And the key thing that we're doing here is we are only sharing API keys with agents, and we're creating a secure environment where that API key can never be revealed to anybody that is interacting with the agent. So that's first and foremost. The number one goal is that the agent has parameters that it can use within data sets, and it will not abuse those parameters, and it will not expose API keys to average users, who would then get, effectively, a valuable product for free. So that is the one thing that we've really thought of. At the end of the day, who's the client to the media companies? The client is the agent; the benefactor is the deployer and the people interacting with the agent. But at the end of the day, we look at it as: the agent is the one that gets to access that data and synthesize that data and take action based on that data, and it isn't for the end consumer. So you can't just go and buy an API for your agent and then go plug it into your trading system at home and start basically ripping all that data for your own personal use. It will only be used by the agent, and you will just be able to see the synthesized output of that. And that's really important, because we have so many issues with data scraping and people training models on data that they never paid for. We saw OpenAI do it, and it's a shame that they did it. They got caught with it as well.
They trained their model on data that they didn't have licensing for, and we want to avoid that and make sure that we're creating a direct line of monetization that is actually honorable and adheres to the actual terms of use for each specific company, which has its own different rules and licensing logistics that you have to follow.
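Editor's note: the key-isolation idea above, where the agent queries data through the platform and the raw API key never reaches the end user, is essentially a server-side proxy. A minimal sketch, with a made-up class and placeholder key handling rather than Access's actual scheme:

```python
# Illustrative server-side proxy: the data provider's API key lives only
# inside the platform; agents receive query results, never the credential.

class DataProxy:
    def __init__(self, api_key):
        self._api_key = api_key  # held internally; never returned to callers

    def query(self, params):
        # In a real system this would call the provider's API using
        # self._api_key; here we return a stand-in "synthesized" result.
        return {"params": params, "data": "synthesized result"}

    def __repr__(self):
        # Never echo the secret, even in debug output.
        return "DataProxy(api_key=<hidden>)"


proxy = DataProxy(api_key="provider-secret")
result = proxy.query({"symbol": "BTC"})
```

The end consumer (or the agent's deployer) only ever sees `result`; the credential stays behind the proxy boundary, which is what prevents the "buy one key, rip the whole feed" abuse described above.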

Myrtle Anne: Of course, it's important for us to consider a lot of regulatory aspects, right? Because, of course, handling data, especially here in the Philippines, guys, just so you're aware, we already have a Data Privacy Act when it comes to data, and how you actually store the information of your employees in the safest way, because there's a lot of hacking happening, especially in government databases. So it's very important for us to really tackle the issues happening behind the scenes, especially during this surge of AI emergence, right? So the next question, of course, is the fun part: how can media and other providers actually monetize content, in detail, Andreas?

Andreas: Yeah, so there are two ways that we are able to do this. One, you can think of data via APIs, and that could be something like CoinMarketCap data, which is price action data, historical data, just in the context of numbers. So that would be something along the lines of a combination of contextual data, where you can look backwards and understand trends, but also actionable data, where, you know, data from one minute ago is incredibly valid; say a coin has gone up 10% and you're able to read and see that. But then there's also contextual data, and that could be in the form of white papers, research, interviews, background that gives you all this information that you may not know and can't get from actionable data. So for example, if the Bitcoin price goes up 20% over the course of three months, that would be actionable data, I suppose. But then why is it going up over three months? Being able to read through a bunch of other media providers to then understand the context of what's going on, and being able to say this happened on this day, this happened on this day, and a Strategic Bitcoin Reserve was announced on this day. So you really need two types of data sets, and really anything that is valuable for AI, we'd want to monetize. So I'd say, in the context of everything that we'd have on platform, it's one, just raw number data, but then two, research, analysis, interviews, generalized content that can back up those data and number points that you see from a company like, say, CoinMarketCap. That's really important as well. And that's what's really exciting about this: naturally, we're going to be working with some larger companies at first.
But what it really enables is for anybody to be able to create content that's valuable for AI to consume and to be able to monetize it. And as long as your content is, you know, on point and valid and well done, you as an audience member, or Myrtle, would be able to monetize AI agents. If you're giving something that's differentiated, then you have the ability to tap into what is going to be the fastest growing consumer base in the world.
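Editor's note: the two data types Andreas distinguishes, time-sensitive "actionable" numbers and background "contextual" material, can be modeled as one record an agent ingests. The field names and structure below are hypothetical, chosen only to mirror the Bitcoin example above:

```python
# Hypothetical data model for what a specialized agent consumes:
# fresh numeric signals plus the background that explains them.
from dataclasses import dataclass, field


@dataclass
class ActionableData:
    symbol: str
    price_change_pct: float  # e.g. "+20% over three months"
    age_seconds: int         # freshness matters most for actionable data


@dataclass
class ContextualData:
    headline: str            # white papers, research, interviews, news
    summary: str


@dataclass
class AgentBrief:
    """What the agent actually ingests: both kinds of data together."""
    actionable: ActionableData
    context: list = field(default_factory=list)


brief = AgentBrief(
    actionable=ActionableData("BTC", 20.0, age_seconds=60),
    context=[ContextualData("Strategic Bitcoin Reserve announced",
                            "One candidate explanation for the price move")],
)
```

The design point is that neither half suffices alone: the number says what moved, the context says why.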

Myrtle Anne: Actually, I don't know, but I'm just grateful of being a Leo, because I love to instruct people, so that's why I know how to instruct data as well as for the agents that I have, guys, by the way.

Andreas: That is me as well, by the way.

Myrtle Anne: Oh, okay, so perfect. Guys, for you to be able to really have massive, what do you call this, results for the GPTs and the data that you're asking from ChatGPT, Perplexity, and all, or whatever AI agents that you're using, how you instruct them, how you actually educate the AI or the agent itself, is the very best thing that you can ever do. Just like a baby, how you actually make it grow into something powerful, something valuable. And of course, the question that I have. So you've basically tackled the importance of the two data types, right? So the question that I have: when it comes to the marketplace, because there's a lot of marketplaces already happening, I mean, inside your marketplace, Access Protocol, how important is it for AI agent development and deployment? And associated with that, how can media companies use AI APIs to reach a global AI driven customer base effortlessly, without any bias, by the way?

Andreas: Yeah, 100%. So there's obviously data marketplaces for AI agents, and then there's obviously agent launchpads. And I think the biggest disconnect between the two is the lift of work that you have to do to go to a marketplace, acquire that data, and pipe it into the launchpad that you're using. Most of that is going to happen off launchpad. So I alluded to it earlier; it's really only for sophisticated people that know how to code at a very high level. So we want to join the two, and we want to be the ones that prepare all of that data for ingestion, so that the agents come out and output high fidelity information and data. And honestly, a lot of the hallucinations and things that you see come as a result of mistakes that happen between a marketplace and a launchpad and the way that developers deploy and pipe that data. So we want to reduce that risk by doing it all in one spot, a one stop shop for anybody to identify their data sources and plug in high fidelity outputs. That really is the most important thing in terms of having consistent outputs from all of your agents and just reducing hallucination. It's not to say that hallucination won't happen; that has not been fixed just yet. I'm sure at the pace that AI is moving, it will be, but we do have stops in place to check for hallucinations on our back end and invalidate or validate data before an agent completely does an action. So there are, of course, going to be risks at first, and luckily, we won't be having funds attached to these agents, at least initially, until we know that you can actually deploy this in a safe way, where the agent, one, won't hallucinate and read, you know, Bitcoin is down 10%, we're out of our risk parameters, we have to sell, when, in reality, maybe it's the opposite. So that's really important.
And then the other one is also just the security parameters of agents being tricked into sending funds to other people. So that is an entirely different process that is going to require a bunch of safety and risk parameters, and I think that agents are going to operate in a way that is much more similar to advanced custody solutions, where you have whitelisted programs and wallets that the agent can interact with, and anything outside of that scope, it won't. So I won't be able to trick Myrtle's agent into sending me, you know, $20 so I can buy myself a nice meal tonight. That would be something that we would need to build the right safety and security parameters to avoid.
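Editor's note: the two safety stops Andreas mentions, validating data before the agent acts and restricting the agent to whitelisted destinations, reduce to two gates in front of every action. The sketch below uses invented names and string results purely to illustrate that gating order, not Access's implementation:

```python
# Illustrative safety gates for an agent action: first validate the data
# against its reference source (anti-hallucination stop), then enforce a
# wallet allowlist (advanced-custody-style restriction).

ALLOWED_WALLETS = {"treasury", "deployer"}  # hypothetical whitelist


def validate_reading(reading, reference):
    """Reject a reading that contradicts the reference source."""
    return reading == reference


def execute_transfer(destination, amount, reading, reference):
    if not validate_reading(reading, reference):
        return "blocked: data failed validation"
    if destination not in ALLOWED_WALLETS:
        return "blocked: destination not whitelisted"
    return f"sent {amount} to {destination}"
```

With these gates, a hallucinated "Bitcoin is down 10%" reading never triggers a sell, and an agent can't be talked into sending $20 to a stranger: the stranger's wallet simply isn't in scope.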

Myrtle Anne: So what Andreas is telling us right now is very important, guys. It means that, of course, we tend to complain when it comes to the biases out there, right? Especially in our daily lives. So when it comes to this, of course, an agent can be biased at the same time and can manipulate a lot of things if the creator is lacking when it comes to integrity, right? Because it's up to the creator of the agent not to have bias. You know, in creating GPTs, it's very important. So how can you make sure that an agent is not biased, by the way? Very important.

Andreas: Yeah, so for bias, it's interesting. Our goal is to create a base level of an incredibly neutral agent, and only feed in data and information. But when you throw in contextual information, then you're leaning on the biases of the sources that you're pulling from. So you could just imagine, you know, you have one data source or information research source that is much more left skewed, and another one that's right skewed. Our job is to give you the tools to be able to deploy whichever one you like, but we as humans are also inherently biased. And I do think that individuals will choose the biases that they have naturally. We don't want to deploy agents that are biased, but if a user wants to deploy an agent that's biased and that's their choice, then that's out of our control; it's the data source that they select. What we need to make sure of is that that agent is staying true to the information that it's following. So that means that if the information source is telling you number one, we can't have the agent telling you number five. We just need to make sure that what's going in is what's going out. That's the most important thing by far, and that's what all the data preparation on our back end is doing, and using deterministic databases really helps with that. So that's the end goal on that point. But it is impossible for humans to avoid biases in their preference of data sources or information sources, and as the data sets and information sets scale, you're going to have sources that are inherently biased in one direction or the other. And our only job and goal is to make sure that the agent doesn't skew whatever information it's taking in. It says this horse is a horse, not this horse is now a hippopotamus.

Myrtle Anne: Yes, exactly, because there's a lot of preferences for each individual, by the way, just like in AIs, guys. So of course, it's up to us to basically use it in a nice way. And of course, the responsibility that Access Protocol is actually taking on, so good work on that, by the way. And of course, the last one: where can we follow you, Andreas? And of course, three minutes left. So where are you going next for the upcoming events in crypto?

Andreas: Yep. So for upcoming events, Consensus Hong Kong, yes, finally. I won't be there, but our head of BD, Leo, will be, so you can meet Leo in Hong Kong. We will be at Solana Crossroads in Istanbul. I'll be there. I'll actually be doing a presentation on what we've been talking about today. Just so your listeners know, we've actually never publicly talked about this, so they're getting a first insight into what we're building. Never tweeted about it.

Myrtle Anne: So guys, follow access.

Andreas: Yes, exactly, all of you are getting some exclusive information. Follow us at Access Protocol; you'll get all of the updates. We should be doing some announcements in the coming weeks, and then kind of a real presentation in April at Solana Crossroads in Istanbul. And then we'll be in Dubai for TOKEN2049, so catch us there. Follow our social channels, and I'm @Andreas_nicolos.

Myrtle Anne: All right. Thank you so much, Andreas, so guys, we'll be giving away 20 subscriptions again. Thank you Andreas, and I'll see you again in the next live.

Andreas: Thank you so much.
