Replay: Artificial Intelligence (AI), deepfakes and cyber security
Artificial Intelligence (AI) is evolving rapidly, and so are its real-world impacts. From productivity boosters to emerging threats, we break down both the helpful and harmful applications shaping today's AI landscape.
View transcript
[Visual] The screen opens displaying a slide tile that says, ‘Cyber Smart Week, 6-12 October 2025’. The host, John Mollo appears in a small window at the top right-hand corner. Throughout the video, the webinar slides change to match what the speakers are discussing. At times John, the host will cut in – as each speaker speaks, they appear in the top corner.
The next slide reads, ‘The AI, Deepfakes and cyber security webinar will start soon’.
[Audio: Host/John] Happy Cyber Smart Week this week and thank you all for joining this webinar. We have 3 great speakers for you. Before I get into that, let me introduce myself. My name is John Mollo. I'm a team leader here at the NCSC, and I'll be emceeing this webinar.
If you have any questions throughout, there's a little Q&A box that you can use at the bottom of your screen. Just use that Q&A box to ask any questions, and we'll get to those at a natural point in the presentation. We'll hold those to the very end.
Now, as I said, today we have, three great speakers for you, so we have Dr. Andrew. Dr. Andrew is a Senior Lecturer and Program Director of Artificial Intelligence at Victoria University of Wellington, specializing in explainable and interdisciplinary AI. His research explores AI's social and ethical implications, with applications in ecology, legal decision making, and much more.
Dr. Andrew is an active science communicator, frequently contributing, contributing op-eds, interviews, and expert commentary on AI-related topics, so thank you and welcome, Andrew.
We also have Freisi. So Freisi is from the NCSC, and is an information security professional, with experience in security operations and insurance.
Throughout her career, Freisi has focused on raising the cyber resilience of government entities, and her background includes a master's in public administration, with a focus on homeland security and graduate certificate, and has a graduate certificate in cybersecurity as well.
Freisi joined the National Cybersecurity as a Principal Information Security Advisor and is in this capacity that Freisi provides technical and thought leadership over NCSC standards, guidance, and interagency engagements, and she has previously worked for the New Zealand Defence Force in The Centre for Internet Security.
And our third speaker is Elizabeth from the FMA. So, Elizabeth is a Senior Advisor at the Financial Markets Authority, working in the regulatory Services team. This team leads FMA's response to complaints about unlicensed market activities and scams and drives efforts to disrupt investment scams and raise public awareness, including new threats like AI and deepfakes. Before joining the FMA in 2023, Elizabeth held various regulatory compliance roles in the wider financial services sector in New Zealand and Europe. So, welcome, Elizabeth. I'll just cover off a little bit for you guys around Cyber Smart Week and what we're doing, and then I'll hand it over to our three fantastic speakers today.
[Slide change]
This Cyber Smart Week, we're encouraging people to take a moment to own it. Encouraging people to take time out of their day like they would of any other life admin, to make time to take some simple cybersecurity action to better protect themselves online.
You can see on the right there, we are encouraging folks to focus on a few key actions. The main two at the top there; creating long, strong, unique passwords, and turning on two-factor authentication. If they are taking those actions, then we know it does protect them against a lot of threats that we do see.
[Slide change]
Cyber Smart Week is brought to you by Own Your Online, so I do encourage you to, jump onto www.ownyouronline.govt.nz at some point and see what it's all about.
Because we do know if you are online, you're a potential target. Our research shows that over half of New Zealand and actually half of New Zealand businesses have experienced an attack over the last 6 months. And as we can see, they are becoming more and more prominent from these headlines that are popping up there. And the losses can be, can be quite substantial.
We know that New Zealanders lose $1.6 billion every year to online security threats, but as well as the financial harm, or financial impact, they're also stressful, time-consuming, they impact our mental health and our confidence to actually operate online.
And you personally might not have experienced a cyber attack. It might be through a company that has your data, so the company that has your data may have experienced a cyber attack, but it may then impact you because your information is now out there.
[Slide change]
So, a little bit about us. So, the National Cyber Security Centre (NCSC) is New Zealand Government's lead agency for cyber security, so we're there for everybody, from mum and dad internet users right through to medium businesses, large businesses, small businesses, and government agencies.
We've created Own Your Online as a resource, as an easy-to-understand platform that small businesses and individuals can go to, to help them understand cyber security a little bit more, make it digestible, and know where to start. So, like I said earlier, do check that out when you get a moment.
[Slide change]
We've also created a new tool, so howexposedami.co.nz was launched this week. What you can do here is enter your email address, and it will highlight any data breaches your email address has been in.
It'll highlight the breaches, it'll let you know what information has been exposed, then it'll also step you through what are the actions you can do to help protect yourself, so it is quite good there.
So I'm going to stop sharing my slides, and I'm going to allow Andrew to share his. So as I said, at any point, Tim, if you have questions, you can, just throw them into the Q&A box.
[Slide change]
And that way, we can get to them at the relevant time. But now, I'll hand over to Andrew.
[Audio: Speaker/Andrew] Kia ora koutou, thank you for having me here. It's amazing to see how many people we have on the call. It's great to see the interest. So, what I want to do is just give a very brief overview. I've been told I only have 15 minutes at most, so I will be concise, around cyber security and the age of AI. Just some fundamental ideas and things, and then our two speakers will give you some more detailed examples as well.
[Slide change]
So, John already introduced me, so I will skip the slide, which says what he said, but, in less lovely words.
[Slide change]
So, I want to start by just setting the scene a bit with what we mean when we talk about AI. So, obviously, since ChatGPT came around in the late 2022, we've all been quite concerned around things like large language models, deepfakes, these sorts of things, but many of you will notice AI has been around for a lot longer than that, and we have a whole class of AI algorithms that we can call traditional, or maybe a better word is predictive AI.
And this is all about things like doing classification tasks. So, for example, when you think about your emails classifying things as spam emails or not spam emails, that's actually an AI technique that's been around for, you know, 20 or 30 years. And these types of models are quite different from regenerative ones and have different considerations.
But they are trained in a similar way, in that we have a big data set, and then we have a model that we train to learn about as in that data set, so it can do that task effectively.
So, in that example of threat detection, for example, you may have a dataset that has some examples of previous attacks on the network, and you want the model to learn what network patterns are attacks and what ones are benign or normal traffic.
[Slide change]
So, when we talk about predictive AI, as I mentioned, things like spam filtering, this is all about AI for analysing existing data to find patterns, customer information, and make predictions. So, things like spam filtering, malware detection, network anomaly detection for transaction analysis. And this is, you know, sort of the bread and butter, I suppose, of a lot of cyber security work, and AI has been used in this space for quite some time, in a defensive way.
[Slide change]
But of course, we can also use it in a not-so-great way. With any AI technology, there is both good and bad usages. So malicious uses being things like trying to predict vulnerable targets in code, trying to find software flaws automatically, you know, the sort of automated, threat creation, malware that can learn to blend in, so we've seen much more sophisticated malware in the last 10 or 15 years, of course, compared to the good old days of internet, when you had to write your malware by hand. Not that I never got involved with that.
As well as other things, like predicting, you know, ways to get inside a network, and things like credential stuffing as well. I know I'm moving quite quickly, but I do want to make sure we can get through all of our speakers.
[Slide change]
In contrast, generative AI, and this is, again, what much of the general public think of when they think of AI, is something quite different. So, this is actually trained in a reasonably similar way to these other things I just described.
But the new thing here is that these models, or these AI systems, can create what looks like new content. So, when you think of ChatGPT, it can give you answers to questions, or it can write new stories, and we've also seen things like image creation and video creation in recent years, too.
And that opens a whole new can of rooms in terms of cybersecurity, as well, because these creative outputs can sort of fool humans much more easily than perhaps some of these older systems could do. So, both a breakthrough in a technical sense, but also a new category of risk that I think we'll discuss today as well.
[Slide change]
And the first of these is our large language models, so when you think of ChatGPT or Copilot, this is what is called a large language model. And these are basically… these are really, really big AI systems with billions of numbers inside them that can analyse text and generate new text based on what they've read.
So, when you're, you know, talking to ChatGPT, it's read the whole internet, just about. And so, based on what it's seen, it can then try and predict what the answer you might want is.
And if you're interested, this was enabled by this idea of a transformer. Oops, my slides aren't quite working, but watch is just a certain type of neural network that is much better at remembering or paying attention to certain inputs than older approaches. So our old language models would often forget what you were talking about earlier in the conversation, or in this context, earlier in a document, perhaps, whereas these ones have made them much more convincing and performant.
[Slide change]
The other type of Generative AI that will be quite relevant today is image generation, so when you think about deepfakes, and our second speaker will tell you more about that, this is where we have, either you can describe an image and then ask the machine to produce that for you, or you can ask it to edit an existing image and maybe change someone's face, or make them say something different. And this uses another technique called a diffusion model, which is kind of like a very loose analogy, it's kind of like tuning your TV, you start with a it's almost like an empty image, and it learns to make it more and more realistic until it gets its final image.
And again, really interesting technology, because it allows you to edit images. I mean, I can now be created, perhaps, in ways I couldn't be before, but also a new sort of category of vectors for things like fraud as well.
[Slide change]
So malicious uses in generative AI would be things like phishing attacks, right? So, I mean, we have seen a much more, subtle phishing attacks, much more targeted emails that you might get now when people want to, convince you that they’re from your bank or your insurance company.
Vishing is a sort of a newer term, but this idea of cloning voices. There's been a few cases where CFOs of companies have been sort of tricked into releasing money or approving financial transactions based on an artificial voice, as well as other issues like disinformation, propaganda, those wider concerns as well.
So, a lot of this is actually not new concerns, but just more scale, more realism, scaling things up, which makes the work that all these lovely people do even more important.
[Slide change]
Potentially some defensive uses, so a little bit harder, maybe, to find. Things like simulations, so you can make, sort of, realistic simulation for training, trying to analyse code and find vulnerabilities. summarizing threat warnings, all sorts of things. So we often talk about the malicious uses, but it's also a really good way to test your systems and sort of, you know, red hat testing as well.
[Slide change]
The question I get asked a lot, and I think it's quite relevant to this context too, is can we detect these the use of these systems. The short answer, in my opinion, is no. So, when, ChatGPT first came to scene, a lot of people at my university, we can just use this magic software, and it will detect students using AI, and it will solve the problem, but that's, you know, shown to not be very, reliable, and as this technology continues to improve further and further, it's only going to get harder.
These generative AI is trained to be human-like, it's designed to speak like us, make images that look like us, and so it's a very hard task to try and detect human versus machine when they get increasingly closer and closer together. So, we do need to be slightly pragmatic and say we're not going to be able to detect such things, and you have to assume that what you're seeing online more and more won't necessarily be, distinguishable as AI or human, unless we have some other ways of verifying it.
As well as false positives can be very damaging. If you accuse somebody of using AI for something malicious, and they didn't do it, then that can have a big impact on their reputation and potentially, you know, legal consequences as well.
[Slide change]
I feel like I'm just being a downer. I don't mean to be, but the more general risk of generative AI are things like hallucinations, bias, misuse, automation. And I mean, there are also, of course, a lot of potential benefits, right? I mean, it could potentially improve our productivity, we could spend less time writing emails or summarizing documents, we can code more efficiently, but I think the key message here is it's about using it in an appropriate way, and having all these safeguards in place to protect people, both within business and also in society more generally, from the bad actors that might get involved.
[Audio: Host/John]I think that's probably a good point to bring in this question there, Andrew. So, relating back to what you're talking about in terms of spam filtering, they sort of asked the spam filtering not based on a spam score, how is it AI? And I guess, you know, it's what you're alluding to, we're improving spam filtering now by incorporating AI into it.
[Audio: Speaker/Andrew] Yeah, yeah, so I guess your old-fashioned spam filtering would be things like looking at certain keywords, you know, if you say million dollars in an email, that's often spam. Whereas nowadays, we see more sophisticated spam filters that can, find other phrases or other signals, for example.
Yeah, we have things like who's emailed you before, whether or not that email address is associated with spam, or even other such things. And so using AI, you can sort of automatically learn some of those patterns in a way that's… can be more… you can be almost more agile, if you will, than that sort of older-fashioned approaches.
[Audio: Host/John] Yeah, I think it's really important, because like we saw in one of the earlier slides of all those data breaches, right? Folks can start to use those data breaches and use AI to make phishing attacks and spam emails more believable, right?
[Audio: Speaker/Andrew]
Yeah, and you can also use that as an educative tool, right? Here’s some good websites out there where you can, kind of make fake phishing attacks based on yourself, and then you can sort of learn to recognize how easy it is, and so I think it's also quite good for that use. So it can be a bit scary to see what's possible, but at the time, you know, knowledge is sort of the best defence in these things.
[Audio: Host/John] Awesome, cool, I'll let you carry on, but keep the questions coming, folks.
[Slide change]
[Audio: Speaker/Andrew]
Other concerns include data and privacy, so when you use any of these models, right, especially generative ones, you need to be very careful about where that data's going if you have a co-pilot license, then Microsoft says that the data will, sort of, remain. If you have an enterprise license, will remain within your walled garden, but if you're using the free versions, or even some of the paid versions of ChatGPT, they may be using your model for training.
There were some issues recently where people's conversations had been indexed on Google because they hadn't turned off the index function, and so you need to be quite careful about what systems you're using and where that data might be going, and who you trust.
[Slide change]
Sort of a more interesting one that's been developing is also that as we use the technologies more in a general sense, there's some evidence emerging that it might affect our ability on how we learn or how we think as people. So, if you outsource your critical thinking too much to these models, you can actually lose some of that ability to reason and to check what they might be doing.
And there's a wider concern as well around if we have generative AI models replacing some jobs or some tasks in the workforce, that that actually often targets, sort of, the junior jobs first, and there could be a bit of a career gap. So, I think as organizations and society, we need to think about what role it's going to play in how we help train people to use these technologies and to go into those higher roles as well, as the workforce changes a little bit.
[Slide change]
And so, some other mitigation strategies, so this idea of human in the loop is that if you have a high-stake use of AI, any, you know, sort of decision-making process that affects people, or involves money. You need some form of human loot, some form of oversight, so that people are checking those outputs and validating them. A new concept that's come around recently is this idea of the AI bill of materials, so if you've heard of the software bill of material is sort of SBOM, similar idea. You can sort of have a verified record of what model you're using, what version, where the data came from, who holds the data, you know, having good documentation and being able to show, show your work, kind of.
As well as, of course, the standard practice in cybersecurity adversarial testing, red teaming, continuous monitoring, auditing. You can even use AI to monitor AI if you wanted to, but then at some point, you probably get too deep in that abstraction, and it's very hard to actually figure out what's going on. So, I think, you know, being quite… making sure we test things before we deploy them, and we have good empirical testing and oversight on an ongoing basis as well.
[Slide change]
So I talk quite a lot, John. I don't know if it was too much of a rush, but if there's any other questions, I'm happy to answer some now, too.
[Audio: Host/John] Yeah, there's a couple of questions here that I might, fire your way, and we could probably get Elizabeth to bring up hers while I ask these. A lot of questions for you. We'll focus on a couple. If an employee enters sensitive client information on a public AI site, e.g. OpenAI, are there any safeguards from preventing such sensitive information from being cited in answers to other random users?
[Audio: Speaker/Andrew]
Yes and no. So OpenAI doesn't actually use what you give it to answer other people's queries. What they might do is use it to train a next version of their model, and so it could be incorporated into future versions and leaked that way. The other risk is that, sometimes the chats themselves can be shared online, especially if you don't disable the right setting, and so you've got to be aware of that as well.
But those are the two main things. And then also, of course, it depends how much you trust what OpenAI say about how they use your data. And they do have terms and conditions, but you may be suspicious as to whether or not they might sell some of that information, especially if it's in a different country, so I would be cautious, and there are paid versions, as we said, who offer higher protections, but again, if it's very sensitive, you need to be really, really careful, and make sure you have proper consent and transparency.
[Audio: Host/John] Cool. And one more question there for you, Andrew. Do you see the field of AI assurance growing in the coming years, as many more businesses implement Gen AI into their processes? Would assurance over our LLMs become a key process?
[Audio: Speaker/Andrew]
Yes, yeah, short answer is yes, I think. It's such a new technology that people are often not aware of some of the some of the issues that could happen, and so I can really see a growing need for people who have the expertise to, audit and to check and to assure these things. And I think part of procurement going forward as well is going to be really focused on what the vendors can guarantee, and also who is responsible for harms, if they occur in some of those conversations.
[Audio: Host/John] Awesome. Cool. Thank you very much, Andrew. For that, I'll now, hand over to Elizabeth from the FMA, and she can, share her slides, and she'll just run through a little bit what they are seeing in regards to AI and deepfakes.
[Slide change]
[Audio: Speaker/Elizabeth] Thank you, John. Thank you, Andrew. And thank you all for, joining us this afternoon. John already gave a short introduction on who I am. So, I'm Elizabeth. I am with, the FMA, working in the Regulatory Services team.
And this is a little bit of a background to those who may not be familiar with the financial markets authority, the FMA, we are the Financial Markets conduct regulator in New Zealand, and our role is to promote fair, efficient, and transparent financial markets that work for all New Zealanders.
Our goal is to make sure that New Zealand businesses, consumers, and investors believe the financial services sector works well for them.
[Slide change]
And as part of this, we look at the interests of all New Zealanders engaging with the financial sector. We want investors and consumers to make good decisions, know their rights, and also know how to protect themselves.
And investment scams pose a serious threat to us, which means that we play a key role against investment scams. So, we get a lot of reports of investment scams on a daily basis, and through this, we've seen how these scams are evolving rapidly.
And unfortunately, we see in our line of work the concerning development of AI being used maliciously to promote investment scams, with scammers creating very convincing material at a large scale to deceive people.
[Slide change]
So, if anyone still thinks of scams as emails from Nigerian princes, or letters and emails full of spelling mistakes, those days are gone. Scam operations have evolved into very sophisticated, large-scale businesses that run like professional companies, and in many cases, run by organized crime networks.
They use call centres for trained staff to manipulate victims. They use digital marketing techniques like paid social media ads and search engine optimization, and some even offer scams as a service on the dark web, with complete scam kits, including fake IDs, email templates, and scam websites being offered, so that anyone can carry out a scam.
And it's a lucrative industry, with scammers claiming hundreds of thousands of victims every year. And a major driver of this success is their use of technology, especially AI. Which has fundamentally changed how scams are created, and how they are spread.
And AI has made scans faster and cheaper to set up, more convincing by generating targeted messages, fake documents, emails, videos, and entire websites in minutes. And they're harder to detect. Where we used to advise people to look out for poor grammar, or strange language, or bad Photoshop, AI now removes those red flags with ease. And with AI, scammers can rapidly and easily change their scam tactics and launch new scams.
They are able to scale their operations across multiple countries in one go, and target specific groups of people based on their age, interests, or even online behaviour. And now with deepfakes, we're seeing a new level of deception.
Scammers are creating fake videos of celebrities, politicians, or well-known business leaders promoting investment scams.
And for some, these will look and sound very real, and they make it even harder for people to know what's genuine.
[Slide change]
So, what does it actually look like when scammers use AI to help them? One of the most common uses of AI we are seeing is with the mass production of scam websites.
These scam websites often feature AI-generated branding and content with professional-looking names, logos, and layouts, and AI-generated text that is convincing and looks professional but is often generic and purposely vague about what the website is actually offering.
They use stock photos or AI-generated images to create fake staff profiles and build false credibility. And they often have chatbots that are powered by AI to answer questions in real time, to gain trust and harvest sensitive information from users.
You'll also see fake testimonials and reviews, often fabricated by AI and paired with fake user accounts.
And what's especially concerning is how easily these sites can be replicated. Scammers use website templates that can be duplicated hundreds of times with minimal effort.
And with the only slight change being the logos, maybe the entity names, and content to match with the countries, languages, or demographics they are targeting.
So, on your screen right now, you see an example of a duplicate website.
So, while you here see the same template used for two scam websites, or for scam entities, BLG Finance and ATFX, we found close to 150 copy websites, all published around the same time.
These websites used the same template, the only difference were the name, logos, and sometimes the colour schemes.
So, this kind of mass production of scam website is made possible by AI, and it allows scammers to scale quickly and target victims across multiple regions with very little cost or effort.
And it also makes it possible for their scam to survive. It's when you take down one domain, 99 remain.
[Slide change]
We've also seen a surge of false ads being launched across social media, so these ads often use clickbait headlines designed to grab attention and mislead people into believing fake information.
In the same sensationalist headlines are often reused across multiple ads. For example, we've seen identical headlines falsely claiming that several public figures have been arrested due to providing investment tips on a popular talk show or that the New Zealand super has been abolished. The aim here is to deliberately spread disinformation and cause public concerns or fears. Now, we've seen a lot of these ads. I've seen a lot of them on my personal social media pages as well, and I'm curious to know if any of you have seen these types of ads on your social media as well.
And if you have, put it in the chat, tell us what you saw, who was featured in the ad, and what was the ad saying.
[Slide change]
So, we'll just get back to that later, but these ads are often published by sock puppet accounts, which are fake profiles presenting to be real people or hijacked accounts. And these ads often misuse real images or AI-generated photos of celebrities, politicians, and other public figures.
The AI-generated images are also realistic, but slightly off, and they're used to lend credibility to fake scam promotions.
And many of these ads link to fake news articles, like the ones you're seeing right now, on websites that mimic the look of a legitimate news outlet making them appear more trustworthy and further spread disinformation.
They use AI to imitate the tone, formatting, and style of the news outlet, and write a convincing story relevant to the country they're targeting.
And what's especially concerning here is the scale. Scammers launched dozens of ads and articles at once, swapping out faces and names, but keeping the same headline.
And they do their homework. They research their target country, which means that when they are targeting New Zealand, they know which public figures are trusted, or popular, and what is currently happening in New Zealand's news.
It's basically a copy-paste strategy designed to flood feeds and reach as many people as possible.
And ultimately, the goal is to drive people to scam websites, where they're asked to sign up, share personal information, and eventually hand over money. But before I move on, John, are there any responses in the chat?
[Audio: Host/John] Here’s one that just popped in. Someone said Hilary Barry endorsing a diet supplement on Facebook. And I think our Prime Minister's also been impersonated a whole bunch of times as well. And those ads they appear on news websites that we go to. And websites that we trust, but the ads aren't always necessarily legitimate, so there’s some good examples there.
[Audio: Speaker/Elizabeth] Yeah, that's really good. We've definitely seen the Hillary Barry ads as well. What they tend to do is stay focused on one person and release many different types of scam as it could be diet supplements, but at the same time, you'll see her promoting investments as well.
[Audio: Host/John] And we've got, yeah, Adele promoting the diet, and Oprah as well, so yeah, the diet one seems to be a popular one.
[Audio: Speaker/Elizabeth] Yeah, yeah. So, following the fake social media ads, as Andrew has already given some information on what deepfakes are, I can tell you we see a lot of the deepfakes, increasingly used to promote fake investment schemes as well.
[Slide change – this slide has been redacted due to unauthorised sharing]
These aren’t just videos of made-up people. Increasingly, they feature real, well-known individuals, like celebrities, politicians, trusted public figures, and they can even feature someone you know personally.
Often, scammers will design it so that it mimics or manipulates the appearance, voice, speech patterns and mannerisms, and even the emotional tone of the person. And the goal is simple, to make the scam look convincing and credible. So, while some deepfakes are still easy to spot, we're seeing rapid improvements here.
As AI technology evolves and scammers get better at using it, these videos are becoming harder to detect.
So, I'm just gonna show you a few clips of some deepfake videos. These were all used to promote investment scams and have circulated on different platforms. Some will be more convincing than others, and what I want you to do is, as you watch it, I want you to imagine you're scrolling through your social media feed, and one of these videos pops up.
What would stand out for you to suggest that you are looking at a fake? And write your answers in the chat.
[Audio shared] Hi everyone, I'm Carmel Fisher. My goal is to help more investors, so I've created a WhatsApp investing group. Last week, two stocks I shared with the group soared 88% and 112%, respectively. If you don't want to miss out, join my investing group.
[Audio shared] With at least $400 in their account can now use asset trading to multiply their investment tenfold in a week, thanks to a new advanced trading platform. Thanks to artificial intelligence technology.
[Audio shared] My new project will be available to citizens of New Zealand for a limited time. Developed powerful software. This software is the first of its kind in the world. Personally, I invested $5 billion in its development over 2 years.
[Audio shared] Prime Minister, recently, New Zealanders have seen many advertisements featuring you personally, where you guarantee an income of $35,000 per month through the Quantum AI platform. Some are calling it a scam, while others claim they've earned even more than $35,000. Who should people believe if they want to join and start receiving passive income automatically?
[Audio shared] Don't click on suspicious links. The link below this video is official, and by using it, you truly can earn, I guarantee it. The project is supported by the New Zealand government and the Cabinet.
[Audio shared] Invest $450 New Zealand dollars in this project and are guaranteed to earn $70,000 on the very first day. I am Jacinda Arden, and together with the most influential people in New Zealand. We are launching a new financial project for Kiwis that will completely change your understanding of making money.
[Audio shared] Amy, why do you promote your platform to the masses? Why should people trust you?
[Audio shared] First of all, I want to say that I myself have invested $900 million in this platform, and I see results that have not been long in coming. I need a lot of savings to get started. Just $450 New Zealand dollars is enough to start trading. It is not a very large sum, but it can change people's lives and destinies, trust me.
[Slide change]
[Audio: Host/John] Yes, we've got some comments in there, that the synchronization of the lips and the voice is off. Bit jerking, the frame rate is a bit funny, is that right, Elizabeth? And someone's pointed out Peter's Jackson accent is wrong, doesn't sound like that.
[Audio: Speaker/Elizabeth] Very good. Yeah, those are really, really good indicators, and I'm happy that some of you picked up on that. Yeah, so one of the key things to really pay close attention to when you're watching a video is to check the facial movements, and also, like someone said, the synchronization of the voice and the lips. Often that is not completely perfect yet.
There's also a lot of stiff, don't know if you noticed that, but stiff and unnatural expression, some awkward blinking happening as well, which can be a really good sign that you're watching a fake.
You might have also seen some straight lighting, or shadows, or, skin that's maybe a bit too pixelated. And in general, if you're, if you're looking at an image or a video, check how the skin is looking, because AI still struggles with realistic skin details. Sometimes the skin is too smooth or way too wrinkly. And also, look at the body and its movement, so are there any extra body parts that shouldn't be there, or any body parts missing. Are the hands moving naturally, as you would expect of the person? I think in some of these videos, the hands were moving a lot, which would kind of be unnatural.
And as someone said, the accent is not always right, so listen carefully if you're watching a video.
It's either the voice might sound slightly off, like the tone or the pacing, and especially in New Zealand, listening to the accent is really key, because AI still has a lot of issues with mimicking the Kiwi accent, which is very helpful as, something to pick out.
But if you're still unsure whether what you're seeing is real or fake.
There are a few other things you can do. So, do a sense check.
Does it sound out of character, overly emotional, or manipulative? Is this something that the person would realistically say? For example, does it really make sense that Peter Jackson is trying to sell you an investment?
And also cross-check the information. Use Google or other search engines to see if it is backed up by other reliable sources.
You can also use online reverse image or video search tools to find the original content. And most importantly, if you are still unsure, talk to someone you trust. Get a second opinion, which can make all the difference. And like Andrew said, defects are only going to get harder to detect as your technology improves, so staying alert and informed is key.
[Slide change]
It's not all doom and gloom here, either. While scammers have more smart tricks thanks to AI, the FMA is also evolving our response to disruptive work.
We carry out proactive scanning across multiple platforms like Meta and Google Ads, and we monitor for duplicate websites using keyword searches and online tools. We're also starting to use social listening tools to detect harmful content on social media.
And when we identify scam websites, we engage directly with domain registrars to get them taken down. We will also work closely with the NCSC [The National Cyber Security Centre) to remove harmful content and share intelligence.
And we also engage directly with social media platforms, which is unfortunately where we see a large part of these AI-driven scams appearing. We share scam trends we're seeing and report harmful content directly to get them taken out. This also helps the platform to respond faster and more effectively.
And these disruption activities give us speed, control, and visibility over the outcome, because the faster we act, the fewer people fall victim.
[Slide change]
Now, finally, I want to highlight one of our more, most effective tools in responding to scams, which is our public warnings.
So, these warnings explain our concerns about a particular scam, how it works, and are designed to alert and educate the public, and also businesses and other regulators. The goal is to prevent others from falling victim, but they're not just educational, they're also disruptive.
When we publish a warning, it damages the scammer's credibility, making it harder for them to attract new victims. We also use these warnings to support our efforts to take down malicious websites and order content linked to the scam.
And we publish these warnings on our website, which will be linked in the next slide, and across our social media channels. And when a scam is particularly harmful, we actively engage with the media to amplify the message.
We did this, for example, this week with the impersonation of business leaders and financial commentators on social media, which is linked to a widespread pump-and-dump scam.
[Slide change]
And importantly, our warnings carry weight. They're trusted and often used by other agencies and financial service providers to help convince clients that something is a scam.
The intelligence we share also supports overseas and domestic regulators in protecting their public and businesses by putting out warnings.
But to make it all possible, we do rely on reports from the public and businesses, whether it's a financial loss or a near miss. If scams aren't reported to us, we may not even know that they're happening.
Your reports help us act faster, disrupt scams, and publish warnings that protect others, including your friends, whānau and if you're a business, your clients. So, we take a no-wrong-door approach, which means that whether you report to us, or even NCSC, or another agency, it all contributes to collective efforts to fight scams.
[Slide change] Slide lists the following resources:
FMA Scam Warnings Warnings and alerts | Financial Markets Authority
FMA Scam Basics Scam basics | Financial Markets Authority
Subscribe to FMA news and alerts https://www.fma.govt.nz/subscribe-to-fma-news-and-alerts/
Report any investment scams or near misses to questions@fma.govt.nz 0800 434 566
[Audio: Speaker/Elizabeth] That brings us to the end of my part of the presentation.
But before I wrap up, I want to leave you with a few useful resources. As mentioned, our scam warnings are published on our website, so to stay up to date with the latest scam warnings, have a look at our warnings page, which is updated almost daily with scam information.
We also have a Scam Basics page for practical tips on how to spot a scam and what to do if you come across one.
And you can also sign up for our FMA news and warning alerts to automatically receive the latest scam trends and developments.
And most importantly, if you come across an investment scam, please report it. You can use the email address or phone number shown on your screen right now.
Every report helps us to act faster and protect others. Thank you.
[Audio: Host/John] Cool. Thanks very much there, Elizabeth. There's a lot of great content and tips for folks. So I'll now get Freisi to share her slides, so she can get those… get those up. And just while she's doing that, just a quick note to folks, answering a common question around, will the recording be shared? So we will share it, it'll probably be next week, there's just a bit for us to do in the back end, making sure that it is right and get it to you in the best form possible.
But now I'll, hand over to Freisi. Keep asking the questions, folks, we'll try to get to those at the end. Otherwise, speakers might even just jump in the Q&A and type some answers to some of those questions, that you got as well. But, yeah, handing over to you, Freisi.
[Audio: Speaker/Freisi] Thanks so much. Am I coming through loud and clear?
[Audio: Host/John] Yep, you are.
[Audio: Speaker/Freisi] A: Awesome. Am I, stiff, or awkward, or, is there bad lighting in this room?
[Audio: Host/John] That's good.
[Audio: Speaker/Freisi] Sorry, folks, just wanting to make the case that I'm not a deepfake despite those things, and not having a standard Kiwi accent. I won't offer you any financial advice, but I will offer you some cybersecurity advice.
So, thanks to both of the previous speakers. I'm not going to rehash any of the content that they so eloquently, walked us through, so let's see if I can save us some time.
So, Kia ora koutou, ko Freisi tōku ingoa. I'm the lucky last presenter, and I'm going to just really quickly walk you through the NCSC. I'm assuming that if you're here, you know us and you love us, but I want to bring your attention to our three Tohu, which are essentially our guiding principles.
The first one being, we support everyone in New Zealand to act on informed decisions. That's essentially why we're here today, talking to you about AI and AI risks and opportunities.
In terms of my work portfolio…
[Audio: Host/John] Freisi, did you want to share some slides on the screen, or did you want me to share them for you?
Freisi A: Let's see, are they sharing through now?
[Audio: Host/John] No, they don't look like they are sharing at the moment. I think I've got them here, though, I can pull them up for you. Just 2 seconds.
[Audio: Speaker/Freisi] A: Let's see, did I figure this out?
[Audio: Host/John] Oh, you can see the PowerPoint application now.
[Audio: Speaker/Freisi] Sweet, and then how about now?
[Audio: Host/John] Yep, perfect.
[Audio: Speaker/Freisi] Is it? Okay, great. I like to act as my own tech support every now and then.
[Audio: Speaker/Freisi] Right, where was I? Our Tohu. So, essentially, why we're here is to help you act on informed decisions. Aside from that, we do work with key players to build in the basics, because the onus should not just be on the individual. We do have to look at those that are upstream and ensure that they're doing their part.
[Slide change]
[Audio: Speaker/Freisi] Right, where was I? Our Tohu. So, essentially, why we're here is to help you act on informed decisions. Aside from that, we do work with key players to build in the basics, because the onus should not just be on the individual. We do have to look at those that are upstream and ensure that they're doing their part.
Security is a shared responsibility. And then lastly, we really do our best to counter the most serious harms, so there are limits to what individuals and even enterprises can do. We have a strategic cyber security role, and using our unique insights, our unique position in the ecosystem, we help you and all of New Zealand counter these threats.
[Slide change]
[Audio: Speaker/Freisi] Good cybersecurity starts with a risk assessment.
Essentially, a risk assessment gives you a moment to pause and consider what you deem acceptable and what you need to do in order to protect your data. Some people don't really like the sound of a risk assessment. It's something that is more frequently seen within sort of like large enterprises, you can think your certification and accreditation, which is generally uninteresting to most.
But they are useful. I'll quickly point out 3 quick questions that you should be asking yourself when it comes to AI risks. And you'll notice I don't even include AI here; I'm just talking about data.
That's essentially what this is about, is what is happening to data.
Do you understand what happens with your data once it's shared with a tool?
Are you comfortable with what you're sharing? And what assurance do you have that the tool is sufficiently protecting your data?
The previous presenters talked to you a bit about what can happen with data and how it can be used maliciously.
I think that once you have that fundamental understanding, you can go about putting into practice the mitigations you need in order to ensure that you're working within your own risk appetite.
These don't cover everything, but they're a pretty good starting point.
[Slide change]
[Audio: Speaker/Freisi] I'm going to walk you through some additional fundamentals that might help you understand, the focus areas here.
So, I have got an AI implementation taxonomy. I have borrowed this from some folks and swapped it around for our use.
The focus here really are on consumer apps or enterprise apps. So a consumer app, that's one you're probably most familiar with. That'll be your ChatGPT, your Midjourney, your Bard, your Gemini.
Next to it, I have an example of a common risk, risk being data exposure breaches. And beside it, I have mitigations, which is understanding what you potentially have in terms of protection in place based on terms of service.
And then what you can do yourself by limiting the amount of data that you choose to expose.
The next one, the Enterprise app, is one that you might encounter at work.
It's your sales force, or your co-pilot, and with that, there are other risks, right? Because it has access to a different kind of information, so you have to think about jurisdictional risk, data sovereignty, breaches to privacy or loss of IP.
The mitigations are very similar, but usually there's an entire team of people that you can rely on to go through and read a Terms of Service, and to create policy, and to monitor for anything going wrong.
Beside it, I have an arrow pointing downwards, which just exemplifies the fact that as you go through different implementation types, you have an additional degree of control over your data and how it's being treated.
The last one being a from scratch version that lets you determine what information your AI models have access to, and how it's interpreted.
So those last three options require the most effort, and, it might suit the purpose of a very large enterprise.
But if you are an AI enthusiast, it might be that you download your own models and you self-host them as well.
Oh, lastly, a little bit of a disclaimer, I'm definitely not plugging any products or discouraging the use of any products at any point throughout this presentation.
[Slide change]
[Audio: Speaker/Freisi] So, security people love a good triad.
I've limited myself to 3 for this presentation.
Andrew and Elizabeth have already sort of touched upon these here and there, but I'm gonna just bring it all together because, good things come in threes.
The first one you're familiar with, and it's a fundamental security model that serves as a framework for understanding how risks and mitigations affect data. That's your CIA triad, confidentiality, integrity, and availability.
Security considerations are highly dependent on the implementation type.
In this case, Confidentiality and integrity when thinking about AI.
Is one of the most important.
Because confidentiality ensures that data is only accessible to authorized individuals.
And since AI relies on the aggregation and analysis of large datasets, it means that there’s a lot of movement of data and potentially sensitive data, could be exposed.
Andrew actually beat me to the punch, and he mentioned the fact that a few months ago, security researchers identified that ChatGPT private chats were being indexed on search engines like Google and Bing, to name a few.
Users were inadvertently making their chats publicly available.
However, it was only after, OpenAI received a bit of backlash that they chose to remove both the search results and the option from users to expose those private chats.
An example of the security responsibility being shared there.
Next, I've highlighted in green is integrity. Integrity is incredibly essential. It's usually considered the… sort of like the poor cousin when it comes to, the CIA triad.
Integrity is important because it maintains the accuracy and trustworthiness of data.
In order to get the most and maximum value out of your AI, you have to be able to trust your data. Without it, you're just generating unuseful results.
In case you're wondering why there's a random seahorse emoji on this slide, it's not because I used AI to put this together. It's to point you to something that amuses me, which is, right now, security researchers are pointing out the fact that AI is experiencing hallucinations.
If you ask your preferred Gen AI app to pull forward a seahorse emoji, it will tell you that one exists, because it's eager to please.
However, in the Unicode standard, there is no such thing. You can pull open your phone right now and try to search for a seahorse emoji. It does not exist. AI will tell you otherwise. This is very similar to the Mandela Effect that people experience.
Just to point out that, again, what you get out of these, apps, tools, as intelligent as they may appear. They’re not actually all that great at everything.
The second set of, of 3 that I have here covers both risks and opportunities. I'm not going to go too into depth, but essentially, we've talked to you a bit about the security of AI, and then also keeping yourself secure from AI.
There's also the opportunity to use AI for security, but that's something that's more within the reach for those who are in larger organizations.
Lastly, I'll leave you with security mitigations, and you'll note that I've only highlighted, people.
312
This is because… When it comes to mitigations, people are the most important element.
People follow processes, and they use technology, in order to do what they do every day, and it's important that they know how to do that securely.
[Slide change]
So, before we wrap up, I just want to leave you with a few key messages and a couple fun activities.
That you might want to do with your whānau or colleagues instead of doing the morning quizzes.
The first one is a, is a game, it's called, Can You Trick Gandalf?
And the whole objective is to trick Gandalf into revealing the secret password.
It's a very simple way to practice your prompt injection skills.
And it has, about 8 levels.
I'll only give you one hint, which is it pays to ask nicely.
It often gets you the best results with Gandalf.
This is a really good way of teaching yourself, and also kids, anybody within your close circles.
Some of the failings of AI in keeping data secure. It's very easy to use AI for social engineering, but it's equally, or sometimes easy to social engineer AI as well.
The second activity is one that you'll have to find for yourself. I couldn't find a really good, cute tool for this one, but I recommend that you try your hand at identifying deepfakes.
Just to give yourself an idea of how well you perform.
And more importantly, it might be something that you incorporate into your internet safety talk with your whānau.
You may be able to confidently spot deepfakes, but how prepared are your friends and family to do the same?
This is something that you can work with them in order to help develop their intuition.
In case you're wondering, that is a true and accurate photo of a Moa pollinating a feijoa bush. It's not a deepfake.
This isn't entirely about, you know, fun and games and making better quality memes.
It's, again, like I said, it's about training your intuition.
I really encourage you to put into practice what you've learned today, because awareness on its own is not enough. It actually has to be put into action.
The NCSC envisions and works towards an Aotearoa New Zealand where good cybersecurity happens everywhere, all the time, by everyone. We really encourage you to help us make that true.
The last activity I've included in the slide you've already heard, which is the How Exposed Am I? [https://www.howexposedami.co.nz/]
You might be surprised by the amount of information available about you online.
And so I really encourage you to start there.
Despite the fact that Cyber Smart Week happens in October, the point of all this isn't just to tell you scary stories and make you concerned about the internet.
The tool also helps you take a moment to own it and reduce your exposure by telling you specifically what you can do to get your exposure rating down.
[Slide change] Slide lists two web addresses:
https://ncsc.govt.nz
https://www.howexposedami.co.nz/
[Audio: Speaker/Freisi] Last slide I have is another shameless plug, which are key resources, so rather than listing them all out, please go to our websites.
Directly, don't just use, your chat buddy to help you get to them. Click on our links.
And that's all I've got.
[Slide change]
[Audio: Host/John] Awesome, cool, thanks very much, Freisi. A lot of good stuff in there. Just one of the questions that popped in, is there a risk assessment tool that can be shared with folks?
[Audio: Speaker/Freisi] There are a few tools, we can share them, with you. Currently, the best tools we have available are really aimed towards larger enterprises, but we can see what we can point you to afterwards.
[Audio: Host/John] Cool, and tied into that, folks asking for the link, for the Gandalf secret password activity. That looks like it's very popular. Are we able to share that? Maybe we'll send it out when we see the recording, to folks as well, so they'll have that. That looks like a cool little game there, Freisi.
Cool.
So, let's just take a look, see if we have time for one more question there. So how can you monitor what data is being used or shared, by, any of these AI tools there?
[Audio: Speaker/Freisi] Is that for the full panel?
[Audio: Host/John] Yeah, I think, yeah, I think that's open to the full panel, if anyone wants to talk to that one. Yeah, so how do you, how do you monitor, what data is being used or shared, by a tool?
[Audio: Speaker/Freisi] I really hate, giving an answer with an, it depends. So if it's for your own personal use, in terms of, like, what you're putting in, you obviously know what's there. What is flowing back through, and where it's landing, that's a little bit harder to decipher. Like I said before, there's no such thing as, like, a free tool.
If you're not paying for it, you're paying for it with your data, so it's often the case that once you share it.
It just sort of propels itself forward to be aggregated and reused, depending on the terms and conditions that you've signed up for. When it comes to a larger entity, there are tools that can help you with data loss prevention and monitoring.
Essentially being able to understand what inputs users are putting into all web applications, not just, generative AI sites and things like that. So that can give you a view of what users are actually sharing out.
[Audio: Host/John] Cool.
Perfect, thank you. We've hit the top of the hour, so we might wrap it up there. Big thank you to our three speakers, Andrew, Elizabeth, and Freisi, that was amazing. AI, obviously, it's so topical, there's a lot we could cover, so thank you for covering, some really useful bits in your allotted, allotted slots there. As I said, we'll try and get the recording out to folks, next week. We're just gonna, check a few things on our end.
But otherwise, thank you everyone for joining, and happy Cyber Smart Week!
[End]
What to expect
Time: 1 Hour
Presented by the The National Cyber Security Centre's (NCSC) Freisi A and Andrew Lensen, and Elizabeth Asmerom Asfaha from the Financial Markets Authority (FMA).
In this session, we’ll explore how AI works, where the technology is heading, and the powerful ways it’s already being used across industries.
We’ll also take a closer look at deepfakes — what they are, why they’re becoming increasingly convincing, and the risks they pose to trust and security. Finally, we'll offer practical advice for building resilience in an AI driven world, with clear actions individuals and organisations can take to stay informed, prepared, and protected.
Join us for a webinar that brings together expert insight from both academic research and frontline cybersecurity.