DANIELLA TRAINO - THE CHAT (raw transcript)

Today we have Daniella Traino, and Daniella has a fantastic mantra: the best way to predict the future is to go and create it yourself.


I love it. Hey, Daniella, thanks for joining us today. I wanted to talk to you about how you landed in cyber security. Can you share that story with us?


Daniella Traino  2:48  

I think you'll find that most women, we just don't have a straight path. I did my computer science and commerce degree at the University of Sydney, and I felt the world revolved around business. So I landed straight into a consulting gig doing financial audit, finance projects and things like that. On the floor I was also working with some technology people, and after a while, they were having so much fun that I wanted to find out more about what they were doing. At that point, they were breaking systems, doing what was known at the time as ethical hacking. So I put my skills to use and started learning how to break systems. Every man and his dog at the time was learning ethical hacking, and Hacking Exposed was the book; these days there are all sorts of courses and tools. I had so much fun that I ended up staying on the technology side, doing a lot more technology strategy and things like that. Fast forward a couple of years, and I ended up working in banking and finance, did a couple of gigs as chief security officer, then found my place in R&D when I felt that I was too much on the consuming end of the tech space, and that there must have been a way to design what we really need and to think differently. So I landed in what I thought was one of the most amazing places in Australia, NICTA, which was then starting to move through the merger with CSIRO and became Data61. I started working in the innovation ecosystem, and there are so many other things there I could talk to, but cyber security was just an exciting place to be, even back then, because we were able to find the weaknesses and build stronger systems and structures as a result. And I just love talking to people, helping them solve problems and doing that with creativity, and cyber security gave me that ability.


Beverley Roche  4:48  

Yeah, fantastic. I met you when you were at CSIRO, and it had just become Data61. I just loved the way that you thought about how to solve some of these problems. We had some serious challenges around finding nasty threat actors on the dark net, and we got to work together to try and address that problem, and worked out that we had some really common threads around how to approach this issue. So it was really just fantastic. I wanted to talk to you about programmable, data-driven software, and this bigger discussion around who owns the data. I know you've got some really interesting views on that. Do you want to share them with us?


Daniella Traino  5:40  

Yeah, sure. Thanks for the very broad question, which I think could take us hours to dissect, so we'll just skim around a couple of things in that vein. Well, it's no surprise that we're building economies and societies that are data-driven, and that's been happening for the last few years, although we've only coined the term recently. Some of those in the oil business now say that data is the new oil. I would like to think that it's actually the air that we breathe: you just can't get around it. But it calls into question a couple of things. Whether the data is freely given, whether it carries the biases of the owner or the creator of it, whether you can trust where it was created, or where it was manufactured, or where it was actually manipulated or managed. So it calls into question a whole bunch of things, but also: what do you do with it? Just because the data says the sky is blue, does that mean the sky is blue? It calls into question aspects of how we live and how we breathe and how we build businesses and how we operate. You can't get around it. I think there's a statistic out there that for every internet minute there are trillions of data points being created. We are generating data for everything we do, whether that is the data from documents and things like that, which we know and love and have had for a long time, or, more interesting, the human-driven data. So there's the biological data that we create, our DNA sequence (think of the Genome Project), or the data in our movements. There's interesting research, still very early days and not yet proven, suggesting that even the way we walk, our human gait, the way we expend energy in our body, has a level of fingerprint that could determine who we are, how we behave, and what that means.
You know, we've talked about biometrics and the data that we can collect and generate from there, whether it's our fingerprints or our irises. There was talk within the cyber security industry only a couple of years ago that that would be the new way of identifying and authorizing people. But that's obviously been fraught with a whole bunch of issues we've yet to work through around ethics and the right to hold that data. But then we ask ourselves: if that's the purpose of that data, what happens when it's compromised? Because let's face it, if it isn't, it will be at some point; we'll know more in the future than we know today. When it's compromised, how do we reassert that the data has integrity? And if we've used it to determine our ability to take out a loan, or to assert who we are, then this whole concept of fake news comes into being. Who are we really? Can we prove our identity? And when that provable point is lost, or is called into question, how can we assert that identity? How can we assert what that data was meant to identify? So it's calling into account really interesting things. And I'm going around in circles, because there are just so many aspects to what you're talking about. But look at what's happening with fake news, which obviously calls into question the integrity of journalism, and if you see what journalism is like today, you might even say that was a long time coming. Fake news is really just the old-fashioned military tactic of propaganda on steroids, now that technology enables it, and you can see that the next leap from that is fake identity.
And if you can see that as a military tactic, or a tactic used for espionage and other purposes, then you start to ask yourself: how do we build trust into all of these things, where we rely on the data being the deciding factor in whether or not we trust? It calls into question the systems that we have in place, the system of systems that we're operating in, and whether the systems that we've built over the centuries and millennia will stand the test of time, given that we are now effectively generating data and trusting that the data is the key to integrity.


Beverley Roche  10:23  

That is a massive helicopter view, and all of that is just brilliant. I think I need to unpack some of it.


Daniella Traino  10:33  

Yes, go for it. 


Beverley Roche  10:34  

So I think, if I'm hearing you correctly, we could probably just start with something as basic as Cambridge Analytica, which everybody now knows about, and how that manipulated the most fundamental right that we have in a society, the vote. So I think that's one part of encapsulating what you're saying. The other part is that we've seen identity theft before, but we're now seeing it in a way that targets very high profile people, whose images are being scammed and taken as purporting something that is really valuable to society. So it's no longer down at the citizen level, where we don't want people to suffer identity theft and we were trying to work out means of preventing it. This is really at our political structures; these attacks are hitting places that influence the day-to-day decisions that we make in a democracy, right?


Daniella Traino  11:41  

Ah, totally. I think that if you were a historian of military strategy, you could say you saw this coming. You would see that in World War Two, when they did the propaganda drops out of airplanes over parts of Europe to try to change the sentiment on the ground, suggesting either that the war had finished, or that the Nazis were strengthening their position, or the Russians were this, or the fascists were that. Very simple techniques. But like anything, technology advances can be used for nefarious purposes, and anyone in the cyber security industry would have a long, perceptive view on that. So it presents the question that if you want to influence or take control at a political level, you would try to destabilize democracies, and we've seen that across the world in the way it's been done using data-driven techniques around fake news. And some of the advances in computer vision, which are good for cyber security, are also being used to change perception. Is this really Barack Obama? Is this really Britney Spears? With these notable people, it's very difficult to tell whether that is the real person or not. And that's where I think cyber innovation can actually play a key role. Because you look at these techniques and you say, well, if I'm a person of importance, or a personality if you want to use that term, your brand, what you stand for and who you are is how you actually generate your income, but it's also what you personally stand for, your values. So I can honestly say that there are going to need to be reputational risk services out there to protect those brands, and to provide some sort of risk metrics that say that what you're dealing with is the real deal. And then you can translate that into business branding, which is something we've always needed, but I think cyber security has a role to play.
I also think that with what we're seeing around the right to privacy and those sorts of things, if we were to take the other side of it, instead of being very much the hamster on the wheel, always on the defensive side, if we took a proactive view, those are the technologies and the cyber security measures that we need to be strengthening. And maybe that should change the way we develop these new products and capabilities, so that we build more trust into these things, so that people can tell when something is fake, they can tell when they're being manipulated, and they have a level of integrity and confidence. The tough question is, how do we go about doing that? How do we build those reputational risk models and put those guardrails in, so that we can start understanding the difference between trust and trustworthiness? The answers, I think, like Shrek says, are full of layers. It's not a simple answer, because you're trying to influence at so many points. And like anything in cyber security, the economics need to be in favor of doing something differently, and I think we haven't hit that point just yet. It's almost like you need a crisis to know that you need cyber insurance, or to know that you didn't have sufficient cyber security. So I think it's about starting small and proving that these capabilities can add the value that we see generally. There are already brand reputation services out there, but taking that next leap, taking some of the research that's happening in parallel industries, in marketing and the like, and bringing that into some smaller wins and showing how it can be done, I think, is the way to start. Then there's the privacy lens.
For example, privacy-preserving technologies have been around for quite some time, so they're not new, but they haven't been able to find the scale, and they haven't been able to find the right application and the right need. Consumers don't pay for privacy; they just assume they get it, like public healthcare, and it's not until you lose it that you realize how important it was. So I think with the GDPR coming out, and a lot of other regulatory discussions, you can see that with the US, where California and New York have just been debating some of that, with the right to privacy and a few other things. Changing the language, saying that privacy is a human right, means that we start to look at having privacy by design, and therefore some of these technologies become necessary rather than a nice-to-have. But it's actually about achieving scale and usability. Unfortunately, a lot of these things still provide too much friction. I always thought the password would be long gone by 2019, but lo and behold, we still have them. Why? Because the alternatives are still too hard to use, they don't scale, and they have other problems that are unintended consequences, like the biometrics. So I think it's about taking a particular problem, finding a very simple way of solving it, then making sure there's an economic case for it. And GDPR had its unintended consequences, which were pretty poor if you were one of those who got all those pop-ups in May, all those "sign here, accept the cookies" prompts. Where's the security? Did I get more privacy by doing that? But I think it's a trigger point for us to be asking different questions, and building some small solutions that show you're not increasing the cost of doing business, and you're not making it harder for the consumer to do what they need to do.
What you're making it is a "why not?" question, not an "if" question. So you've got to chunk this down. Unfortunately, cyber security and privacy still seem to be too esoteric. And while researchers here and abroad are doing a lot of really interesting cutting-edge research, which in a couple of years could see the light of day in something more application-based, the barrier to using some of these things is still considered high, and there's a bit of apathy still in the market. That's why you're starting to see so much reaction.


Beverley Roche  18:37  

I would say that there's still a lot of apathy. Not at our level, but at a citizen level, with that trade-off of convenience and not understanding, or not reading, the privacy policy. Folks need little icons, and the icons need to tell you what you care about: are you signing in, are you opting out, rather than 25 pages. Because when a teenager or child says, I want that new application, they want it then and there, so a parent is under duress to say, okay, click, click now. So I think that's one problem statement. The other is that the convenience thing is not driving any of this. Because, as you said, it's almost like you need some massive train wreck in order for average consumers to start saying, I completely now understand all the implications of what's going on with my privacy, from health data collection to biometrics, some of the things that you talked about before. Because if you bring it down to the consumer level, that's what it's all about, really, isn't it?


Daniella Traino  19:55  

Well, see, I look at it this way, and it may not be to everyone's liking. I think that in cyber security and privacy, we have an economic issue. It's unbalanced: it costs a lot to put privacy in, but it doesn't cost as much to be impacted by it, because the cost isn't borne by the person who actually owns the data; it's too far down the track. So I like to think there's a role here for government and regulators to really make a difference. And I don't mean by over-regulating, but by making it a question of: if you don't need that level of information to do business, or to provide the service, or to provide that capability, then it should be super expensive to collect it and manage it. So you change the economic model. For example, when I go and buy a pair of shoes, maybe it's my hundredth pair, why does the shop need to know my date of birth, my email address, my full details and address, when you're not shipping it to me, you're not paying me, you're not giving me buyer protection? Why do you need to ask for this? As a consumer, if we're just talking at the consumer level, I don't feel empowered to say, no, I will not give you this information, because you have to in order to get your pair of shoes. Whereas from that shop's perspective, it should be super expensive for them to ask me for that information, because of all the red tape it requires for them to manage it securely in line with privacy legislation. So they should be asking twice: do I really need it? And if I don't, I don't collect it. If we start changing the economic model, then people will not be so lazy, and I don't mean just developers. Everybody in the supply chain will stop being lazy and will actually say, if we don't need it, don't ask for it. And if we don't ask for it, then we don't have the obligation to protect it to the full extent. 
Therefore, you know, it's a win-win.


Beverley Roche  22:10  

Beautiful. You just answered my next question, which was fantastic, because I was really wanting to understand how we flip that economic value and model. Now, I'm not completely confident that it's going to be done with legislation, but how else can you achieve it? And I think you're right. I think the only way you can do it is by legislation, by having some serious economic impact.


Daniella Traino  22:42  

Well, you know, I'm not a fan of over-legislating, right? I'm not a fan of that. I think we start going into a compliance mindset, and I firmly believe compliance is not security. It is important but not sufficient. In my mind, maybe it's just making sure we have sufficient regulation to make it clear what the guardrails are, and what behaviors we expect of those who are collecting information and managing it. Right, the custodians that own it, depending how you read the terms and conditions, but the custodians. And with that comes a level of responsibility, and that's what legislation should make a little bit clearer: what is that level of responsibility? But then I look at things like privacy commissioners, and certain regulators like APRA and ASIC and all those, and whether or not you think they're doing the right job, do they have enough teeth? Because when someone does not apply those guardrails in a way that is expected from a societal perspective, they should be held accountable. Right? And the more you do that, the more they feel the burden of responsibility, whether that is financial, or they lose their license or their social license to operate, and the more they start asking, well, it is going to be expensive for us to do this, isn't it? We have to take a real hard look at this. And that changes the economics. So maybe it's not a question of saying I need to put in more legislation; it's just making sure that what we've got is appropriate, and then effectively backing it up, putting your money where your mouth is. If someone has a breach, and they did not take reasonable and sufficient measures for the information they collected and the systems they were running, then frankly, they should be held accountable.
And once we start doing that, we start saying, well, the behaviors we asked you to follow, because we legislated them or set those standards, we mean them, and you didn't do it. And there's no excuse these days.


Beverley Roche  24:54  

I think we're yet to see that play out, because we've had so many breaches reported, and we haven't actually seen anything yet. You know, I looked at the quarterly breach report, and in fact someone from the US highlighted it to me this morning on LinkedIn and said, what do you think about this? At the moment, what we're seeing is just a dissection of how many have reported. What we haven't seen is the big challenge of what happens next. If they say they're going to remediate, and they say they've committed to remediating both the impact on the individual and their own organization's policies, standards and controls, how do we know? And what if they do it again? Are we going to have those kinds of repeat offenders? I think we're just not far enough down that journey to start getting the big reveal. Right now we're just seeing numbers, and it looks like fatigue, but we're not really unpacking it. We talked to Graham Cluley about British Airways, you know, the cost of doing business. But what happens when shareholders start saying, well, that was a big cost of business, and we want to know why? What's wrong with the way that you're managing data, because it's impacting our shareholder value? I agree with you, though, that seeing teeth in what we've currently got in place would be really helpful; I just don't think we've seen that yet. And I think that's a challenge we have in this industry. I mean, British Airways is playing out now, so we'll see how far that goes, and we've seen, I think it was the Marriott Hotels, having similar issues raised to the fore. But if you were looking back a couple of years, you saw the Target breach, and that highlighted a number of vulnerabilities that were applicable regardless of the industry you're in, and it really highlights the supply chain issues.
And we haven't even touched the software supply chain, the digital supply chain, which to me is the sleeping giant and the elephant in the room that nobody talks about or has a strong handle on. We'll go there if you want to. But I look at Target, and there was a gentleman who's now chief security officer for an industry, I won't call him out, and he hasn't necessarily given me permission to paraphrase him, but he made a really interesting point which I really do agree with. If you look at the Target breach and the failures along the way, they had controls in place, and you can question whether they were sufficient, but they had a number of controls in place that could have reduced the impact, and for various reasons they didn't fire, or were just ignored. And the implications from that breach: yes, it was highly costly. It also was one of the first breaches that cost certain senior executives their positions. They also had a share price implication, which was quite devastating for a period of time. And yet it rebounded, right? You could ask, did they retain their full shareholder value in sufficient time? They did. For, I think, maybe six or eight months, there was quite a shareholder hit on the price of Target, being a listed entity, because of the fallout of all of this, and it showed there had been a loss of faith in the executives. But looking back even now, they've retained that value. So if you were to make just an economic argument, should we have invested so much in cyber security, should we have had that capability, should we have been on the front foot? Well, no, because it cost us for some time in both share price and reputation, but it didn't stop people going to our shops. Did we retain our price-earnings ratio? Yeah, we retained it over a period of time.
So we've rebounded from that; we learned our lesson, sure. If you're just looking at the economics, this is where I think cyber falls down, because clearly an economic conversation is not the right one to have. The consequences of the Targets and the British Airways and the various other breaches around the world, because of the digital nature of what they represent, are long-tail consequences that we don't necessarily feel up front. On some of those direct costs, we can say, yeah, okay, we're not on a path for making the right investments, or continuous investments, in cyber, because they don't pay. But the long tail of it is: that information is out there. What is a malicious person, who has far more creativity and time than you and I, going to do with it? What is the long game they're playing? If we're talking nation states, it's a long game, and we cannot necessarily attribute the economics of that in today's thinking. If we don't think about the threat landscape much more broadly than those immediate breaches, and OPM is a fantastic case, looking at what information was breached there, the social security numbers of a vast number of government employees in very, very interesting roles, then we're not investing in and protecting future generations in the right way. We're not looking after our businesses, our future businesses and our future generations, if we're just having an economic argument on today's terms.


Beverley Roche  31:06  

And David Lacy talked about this when we interviewed him, about the long game, and how all these small events that don't look small at the time combine into capturing all this data for an absolutely long game. I'm going to jump to one of your other favorite subjects, which is artificial intelligence. I know that you've been doing quite a bit of work around that.


Daniella Traino  31:35  

Yeah, look, I think artificial intelligence is exciting on so many fronts. It has the potential, if applied for social good, to transform our lives, whether it's the way we look at preventative health and lifestyle, or potentially to disrupt even medicine and the way in which we deal with disease and health. There's some exciting stuff happening in Australia in that way, with IVF and precision medicine, companies like Life Whisperer, phenomenal ways of applying technology to better society and humanity. But like anything, you can also understand that it can be used for malicious purposes. With artificial intelligence, we get caught up in the whole Terminator sort of perspective, and that could be a future, but general artificial intelligence at this stage is still seen as a long, long way off. What we're seeing at the moment is more narrow artificial intelligence: the application of artificial intelligence techniques in very specific applications. Machine learning is just one aspect of that, and I'm sure you've had other interviewees who can give you much more technical descriptions. We've seen various forms of artificial intelligence applied in autonomous systems, like the Teslas trying to move up the stack with some of the levels of autonomy. Obviously, the military have tried various forms of it in order to protect a lot of their servicemen and women out in contested environments. The challenge with artificial intelligence, though, and this is the conversation you and I have been having offline, is that in order to build some of these technology enablers, we're asking engineers and data scientists to effectively codify what it means to be human, and to codify what it means to make human-based decisions at scale.
And that's not a bad thing. But they're not anthropologists, they're not ethicists, they're not user designers. And that's okay, because they should be working with all of the above in making those decisions. But in actual fact, what artificial intelligence, and a lot of the more advanced forms of it, what they call adversarial networks, and GANs and deep learning and all these other things, is really doing is holding up a mirror to what society is today: how we behave, how we think, what we've considered to be polite society or civil society or not. And while these things have always been discussed and thought through by psychologists and anthropologists and sociologists and various other -ologists, we've never really sat down and codified them. So now what we're asking is to hold up that mirror, put it down on paper, and make those decisions automated. And in a future where computational literacy and those sorts of creativity skills are going to be so super important, if we give AI the right to vote, so to speak, at what point does the human stay in the loop? What does it mean to be human? Those sorts of points come up very much in a military context, as you'd expect when you're talking about autonomous weapons and things like that, and some of the things they're grappling with make absolute sense in other industries, which is one of the things I find fascinating. If you give AI the vote, do they make all decisions? Do they make the decision to kill? Do they make the decision to, you know, move left, hit the old woman on the road, the trolley problem conversation we have about autonomous systems? And if you do give them the vote, what does that do to insurance? What does that do to liability? Can you say that because it had intent, which is not what we had before, it's okay and you're off the hook?
But then what does that do to morality? All of these really existential questions, which may or may not have been answered in full, we now need to be making much stronger decisions about, because we're about to put them in software; we're about to make them programmable. And so you start asking the questions, as a number of institutes here and abroad are, and I'll point to some of those if you're interested: what does it mean to be ethical? If these are programmable decisions, then I want them to be transparent. There are cases where these technologies are used to make criminal justice system responses, to make decisions about who gets hired and who doesn't, and there's some innovative stuff happening in Australia as well, and they've gotten it wrong, or at least gotten it wrong relative to what was expected as an outcome. The first question you ask is, well, show me how you made that decision. Today, with a human, we can do that, because we can explain it. But when you're asking artificial intelligence based systems to make that decision, they are very opaque, very black box. Unless you have a PhD to unpack some of these algorithms, how are you going to know that the decision they made was made reasonably, and that you're willing to live with it? Then you ask other questions: well, if it made that decision, and it was different to the way I would have made it, does that make it right or wrong? One of the interesting things that happened, I think about two years ago, was that a couple of Google researchers, or was it Facebook, brought this transparency question to a point. I like to talk about responsible AI. The European Union came out with what they called transparent AI, or ethical AI principles; I like to call it responsible AI, because it's a lot broader. It's about trust, it's about ethics.
It's about transparency, but it's also about security and privacy. So, two years ago, sorry, my point was, Facebook or Google pitted a couple of algorithms against each other. And at that point it was just normal, run of the mill, you know how these algorithms learn. What was transformative, and why they stopped the program, was when the AI created something: a new language. And this is where it got scary, because up until that point, the whole idea about what set us apart from algorithms was the fact that we're the creative bunch, and suddenly you have the algorithms creating. Now, what is it, not that horrible Tom Cruise movie, sorry, I'm not a fan. Mission Impossible. It's like this Mission Impossible sort of world, where you're saying, we're the creative bunch and you just do what you're programmed to do, and now suddenly you've brought new insights to the table, you've created something. Wow. And it's something we can't even, as humans, understand, which is even worse.

Well, that's the negative side. The plus side is you need a lot of that, and it becomes really interesting when you start thinking about the adversarial stuff that happens in cyber, and how you need that to be able to subvert some of this thinking. So it's a bit chicken and egg. But again, we're creating, and we're not thinking about what we're creating. So we need to stop and think, but you can't just stop and think and never build. So how do we do both in parallel?
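[Editor's note: "pitting a couple of algorithms against each other" can be sketched, very loosely, as a toy adversarial loop: one player keeps an estimate of what real data looks like, and the other adapts to imitate whatever the first player believes. This is an illustrative simplification in the spirit of a GAN, not a real one; all values and update rules are made up.]

```python
import random

random.seed(0)

real_mean = 10.0      # the "real data" the generator has never seen directly
gen_guess = 0.0       # generator's current output
disc_estimate = 5.0   # discriminator's belief about what real data looks like
lr = 0.1              # learning rate for both players

for step in range(500):
    real = real_mean + random.uniform(-0.5, 0.5)  # a noisy real sample
    # The discriminator nudges its estimate toward real samples...
    disc_estimate += lr * (real - disc_estimate)
    # ...and the generator nudges its output toward whatever currently
    # fools the discriminator. Neither player ever sees real_mean itself.
    gen_guess += lr * (disc_estimate - gen_guess)

print(round(gen_guess, 1))  # converges near 10.0 without direct access to it
```

The unsettling part the conversation points at is exactly this dynamic: each player's behaviour is shaped only by the other, so the pair can wander into strategies, or languages, that neither designer specified.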
And that's one of the reasons why I love some of the approach Europe took with their responsible AI principles, where they came out questioning Asimov's principles around machines and robotics, turning that on its head. Australia's got a number of institutes that are starting to think about AI, and there was a discussion paper, which closed in May, where the Australian Government, at the last federal budget, put some money into saying, well, actually, we need a framework for AI. Which is all great; every country's coming up with frameworks. And don't forget China's strategy around AI: they're going to make it a priority, they've got critical mass, and if nothing else they'll use their cultural approach to get there. But, you know, there's all of this happening around the world.

I think the interesting question is that we've come to a point in history where, as a society, we cannot solve these problems solely as a nation; we need to solve them as a globe. So it calls into question all of these international organisations. They need to step up and truly show collaboration, because responsible AI doesn't happen just in America with the rest of the world expected to follow. You've got to work at it as a world. Little countries can do their little thing, but ultimately, if you want to do this properly, you want to do this with safety and get the real value of AI, you need to work at a global level and have almost, what do they call it, like a Geneva Convention.


Beverley Roche  41:51  

I was just about to say, it does seem to me that, you know, the simplistic answer is we need a Geneva Convention to cover some of the ethics and whether this is good for society. Now, I know that as multidimensional societies we have different values, but we can at least agree on some guiding principles that help us navigate this AI world. Look, this has just been an absolutely fantastic conversation. Thank you so much for your time, and we'll look forward to talking to you another time.


Daniella Traino  42:30  

Awesome. Thanks for having me.


© 2019 by Cybersecurity Café.
