Adversarial attacks and defense mechanisms against deep point processes by Samira Khorshidi

Automatic Summary

Understanding Adversarial Attacks and Defense Mechanisms in Point Processes

Hello, everyone! My name is Samira, a soon-to-be PhD graduate of Indiana University–Purdue University Indianapolis (IUPUI) and an upcoming member of the Siri team at Apple. Today, I aim to present a brief outline of my research on adversarial attacks and defense mechanisms within point processes. To put this in context, my work explores how accuracy, cybersecurity, and trust come together to solve complex problems in trustworthy AI and machine learning.

Decoding Point Processes

At its core, my research revolves around making machine learning models understandable, fair toward marginalized groups, and robust against both adversarial attacks and natural disruptions in data. In simple terms, it's all about fostering trust in AI. The key lies in understanding point processes and the role they play in the arrangement of data.

Point processes, whether stochastic or deterministic, help describe how a pattern might have been generated, effectively explaining the causality behind arrangements. This field of mathematics lends itself to a multitude of applications, spanning criminology, epidemiology, natural resource management, and disaster evaluation. The goal is to understand the underlying causes of data patterns in these applications and how events interact with one another.
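As a concrete illustration (a minimal sketch of my own, not taken from the talk), the simplest stochastic point process is a homogeneous Poisson process on a time interval, which can be simulated by accumulating exponential inter-arrival times:

```python
import numpy as np

def simulate_poisson_process(rate, t_max, rng=None):
    """Simulate a homogeneous Poisson process on [0, t_max).

    Inter-arrival times of a Poisson process are i.i.d.
    Exponential(rate), so we accumulate exponential gaps
    until we pass t_max.
    """
    rng = rng or np.random.default_rng()
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= t_max:
            break
        times.append(t)
    return np.array(times)

# Example: on average 2 events per unit time over 10 time units.
events = simulate_poisson_process(rate=2.0, t_max=10.0)
print(f"{len(events)} events, first few: {events[:3]}")
```

Cluster processes like the ones discussed later build on exactly this mechanism by letting each event trigger further events.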

From Machine Learning Models to Adversarial Attacks

Machine Learning Models

The crux of my research lies in modeling point processes with deep learning and zeroing in on how these models behave when faced with adversarial attacks. In earlier work, I have shown that the virality of these processes, such as disease spread or crime patterns, can be explained by the underlying network of interactions.
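One standard way network structure translates into virality for self-exciting (Hawkes-type) cluster processes, offered here as background rather than as the talk's own derivation: if each event triggers on average n direct offspring with n < 1, the expected total cascade size per seed event is the geometric series 1 + n + n² + ... = 1/(1 - n).

```python
# Expected total events spawned by one seed in a subcritical Hawkes
# process with branching ratio n (average direct offspring per event):
# 1 + n + n^2 + ... = 1 / (1 - n). The process is explosive for n >= 1.
def expected_cascade_size(n: float) -> float:
    assert 0.0 <= n < 1.0, "subcritical regime only"
    return 1.0 / (1.0 - n)

print(expected_cascade_size(0.8))  # -> 5.0 events per seed, on average
```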

Adversarial Attacks

An adversarial attack is a perturbation added to the input of an AI model, designed to skew its output. This alteration may be imperceptible to humans but can dramatically affect a machine's understanding of the data. Even small disturbances can be enough to mislead it. These attacks pose significant risks, especially when we consider deep learning models’ role in real-world scenarios, such as autonomous vehicles or disease modeling.
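Attacks like the stop-sign example are often generated with gradient methods; here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch, as an illustration of the general idea rather than the specific attacks developed in this research:

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon):
    """One-step FGSM: nudge the input x in the direction that most
    increases the loss, bounded by an epsilon-sized perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```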

Improving Robustness Against Attacks

The quest for understanding is coupled with the need to make these models robust against adversarial attacks. This has led to various tactics such as adversarial training, alternative architectures, and generative adversarial networks. These strategies aim to enhance model performance, increase transparency, and deepen our understanding of how these models behave.
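As a sketch of the first tactic, adversarial training simply folds crafted examples into the training loop; a minimal version, assuming the `fgsm_perturb` helper from the previous sketch:

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon):
    """One adversarial-training step: attack the current model, then
    update its weights on the perturbed batch instead of the clean one."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```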

Considering the Cost

Still, it's essential to weigh the benefits of these robust models against their costs, from the obvious financial expenditure to the impact on the environment. Large data centers powering massive deep learning models carry a carbon footprint that contributes to global climate change, an aspect of AI and Machine Learning that's often overlooked.

Fighting the Good Fight

Ultimately, the research into adversarial attacks and defenses is a continuous journey, a cycle that requires constant attention and improvement. Techniques developed in this realm have vast applications, spanning customer behavior analysis, promotional campaign management, supply chains, speech recognition systems, and more. Adversarial attacks pose a risk to all these fields, signaling the urgent need to improve and maintain robustness.

To conclude, this quick dive into adversarial attacks and defenses may have brought many new concepts to the fore. I welcome any discussions or queries you may have about these topics or any other aspect of this intriguing field. Enjoy the rest of the conference and the many insightful sessions to come!


Video Transcription

Hello, everyone. My name is Samira, and I'm currently a PhD candidate at Indiana University–Purdue University Indianapolis; I will finish my PhD in about two weeks. In August, I'm going to join Apple to work on Siri, specifically on trust-related problems with Siri. Originally I'm from Iran, and I worked in industry for about seven years: I started as a software developer and software engineer, then became a CTO, and then moved to Germany to do my master's in computer science. And now, today, I'm here with Hugo. So today I will present a brief overview of my research on adversarial attacks and defense mechanisms against point processes. This is really a matter of trustworthy AI and machine learning, and in this research accuracy, cybersecurity, and trust all come together to solve the problem.

So let's see how and why. As I said, this research is about trustworthy AI, and what I mean by that is that the title I just mentioned touches each wing of this chart here. Basically, in this research we are making machine learning models explainable; I'm researching and studying model fairness toward marginalized groups; and I'm working on machine learning models' robustness against adversarial attacks and natural shocks in the data, which are a normal part of real data.

I will explain some of them here. The whole thing is very technical, but I'm not going to go through the technical aspects; I just want to introduce the ideas. The reason is that what we are doing here is about trust. I believe that machine learning models should be trustworthy: explainable, responsible in their decision making, and also safe and robust. So all of this is about trust, basically. To get there, I first have to briefly explain the two words in the title, which are very important and define the domain of this research: point processes. What are they? To define them, I should say that point pattern analysis is a field of mathematics that studies the arrangement of points in a space, where that space could be time, time plus a physical space, or whatever space we define. Point pattern analysis then focuses on finding out why the points are arranged in this particular way. For example, on the left side, why is t2 not closer to t1 rather than t3?

There is a reason behind that, and that reason is what is studied in point pattern analysis. It's a very important and broad field, with applications in criminology for analyzing the locations of crimes and accidents, in epidemiology for analyzing how diseases evolve in society, for example COVID-19, and in natural resource management and natural disasters like floods and wildfires. All of those are explainable by point processes, which are used for describing and analyzing those kinds of events.

Anyway, as I said, point processes are the thing that can explain it; they explain the arrangement. A point process describes how a pattern might have been generated, so basically what I'm talking about is causality: point processes explain the cause behind the arrangements. They can be stochastic or deterministic. If one thing is happening over and over in a deterministic way, the point process is deterministic; but if it's random, it becomes a stochastic point process. Crimes, disease spread in society, wildfires: all of those are stochastic point processes. They are random, but random in some structured way, and we can explain that randomness. Here are examples: these points on the left are crimes happening in Indianapolis, and somehow they are clustered, and there is an explanation for those clusters on the map. In this research, we are interested in those point processes whose points are not independent of each other.

What I mean by that is that there are clusters because of the interaction between the points themselves, or because of some third factor that is affecting all of them. This is important because that's what is going on when we have, say, swine flu and there are farms close by, or when we have COVID-19: the rise in COVID-19 cases is because of the underlying cause and the interaction between people in a society. The same holds for crimes and for earthquake aftershocks. So let's see what I mean by that. Here I'm showing you how the points are generated, where they come from, in a cluster point process. Basically we have the initial points; let's say the initial people infected with COVID-19, who appear in society randomly.

Then, nearby and in the near future, we have secondary infections, and those newly infected people can infect others as well. That is what is going on, and that relationship is what we are interested in. Zooming in on one part of my research: I'm modeling these kinds of processes with deep learning models, studying their behavior when there is an adversarial attack against them, studying how robust they are, and then how we can make them explainable and transparent. Toward transparency, I have shown in one of my previous publications that the underlying network can explain the virality of the process. For example, the network of people in a society can explain how many cases we eventually have.
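This generative story (random initial events, each spawning offspring nearby in time) is exactly the cluster representation of a Hawkes process. Here is a minimal simulation sketch, assuming an exponential waiting-time kernel, with all parameter names my own:

```python
import numpy as np

def simulate_hawkes_branching(mu, n, beta, t_max, rng=None):
    """Simulate a Hawkes process on [0, t_max) via its cluster
    (branching) representation:
      - immigrants ("initial infections") arrive as a Poisson
        process with rate mu;
      - each event spawns Poisson(n) offspring ("secondary
        infections"), delayed by Exponential(beta) waiting times;
      - n < 1 keeps the process stable (subcritical).
    """
    rng = rng or np.random.default_rng()
    # Immigrant times: Poisson(mu * t_max) of them, uniform on [0, t_max).
    immigrants = rng.uniform(0.0, t_max, rng.poisson(mu * t_max))
    events, queue = [], list(immigrants)
    while queue:
        t = queue.pop()
        events.append(t)
        # Secondary events triggered by this one.
        for delay in rng.exponential(1.0 / beta, rng.poisson(n)):
            if t + delay < t_max:
                queue.append(t + delay)
    return np.sort(np.array(events))

times = simulate_hawkes_branching(mu=0.5, n=0.8, beta=1.0, t_max=50.0)
print(f"{len(times)} events")
```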

I'm doing that by crafting adversarial attacks against these models and simulating non-stationary changes to see their effect. For example, coming back to COVID-19: COVID-19 had an effect on the crimes that were going on in US cities, and I'm studying and trying to understand those effects. Eventually I have proposed some methods to improve the models' robustness. The whole point is that I'm trying to make them more interpretable and transparent in their behavior. This is important because the field of point processes is hard to understand in itself, and when it comes to deep learning models there is always the risk of adversarial attacks, regardless of the performance and accuracy of these models. As you know, adversarial samples are inputs to our models where the perturbation is not perceptible to human eyes. For example, these two stop sign images you see here both look very similar to us, but for a machine learning model that tiny noise is enough to mislead it, so it can say that this is not a stop sign, this is a yield sign. Imagine this in a real-world scenario: you're driving your Tesla on Autopilot and these kinds of attacks are happening against those models.
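For point-process models, the analogue of that pixel noise is a small shift in event times. Here is a hypothetical sketch of such a timing attack; the `model_nll` callable, returning the model's differentiable negative log-likelihood of a sequence, is an assumed interface, not the actual attack from this research:

```python
import torch

def perturb_event_times(model_nll, times, epsilon, steps=10, lr=0.01):
    """Gradient-ascent timing attack: shift each event time within an
    epsilon-ball so the model's negative log-likelihood goes up."""
    delta = torch.zeros_like(times, requires_grad=True)
    for _ in range(steps):
        loss = model_nll(times + delta)  # assumed: differentiable NLL
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # ascend the model's loss
            delta.clamp_(-epsilon, epsilon)  # keep the shift imperceptible
        delta.grad.zero_()
    return (times + delta).detach()
```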

So it's important, and once we understand these attacks, they will help us understand how our models behave. In my case, the motivation for my adversarial attacks comes from real-world scenarios: a variety of dynamic phenomena including YouTube video views, Twitter shares, viral marketing campaigns, the spread of computer viruses, and gang retaliation. Those are the kinds of problems I'm working with, and they inspire my adversarial attacks. The other thing that is important here is to see how the model itself can affect the adversarial attacks. For example, if a model is deeper, how will its sensitivity to adversarial attacks change? Or if you increase the size of your input, say you pass a larger image to your model, how does that improve the robustness of your model? And some other metrics as well.

Anyway, for robustness there are some approaches people take in the literature, and in the real world as well. One is adversarial training: basically, they use the adversarial samples to train their model. There are also alternative architectures that people use.

For example, they say that if you use convolutional layers instead of regular dense layers, you might get better performance; or you can use generative adversarial networks to improve the robustness. This one comes from the idea that, OK, we don't understand deep learning models, but they can understand themselves, so we use one deep learning model to improve the robustness of another. And when you're talking about the robustness of your models or systems, it depends on the type of robustness: is it attack-specific or attack-agnostic? Basically, you have to specify whether your robustness certifies your model against all possible attacks or just one, or some family of attacks. Toward robustness, it's also important to know your data. For example, my data itself is random and noisy, so deep learning models can overfit this type of data. The other thing is that you have to be able to measure the uncertainty and sensitivity of your models.
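As one simple illustration of measuring sensitivity (my example, not the specific metric used in this work), you can probe how much a model's output moves under small random input noise:

```python
import torch

def input_sensitivity(model, x, sigma=0.01, trials=100):
    """Crude sensitivity probe: evaluate the model on noisy copies of x
    and report the spread of its outputs. A large spread under tiny
    noise suggests the model is fragile around this input."""
    with torch.no_grad():
        outputs = torch.stack([model(x + sigma * torch.randn_like(x))
                               for _ in range(trials)])
    return outputs.std(dim=0).mean().item()
```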

And for data like tweets, the content matters in addition to the timing, and in those cases the adversarial attacks are different: one might just change the text, another might change the timing of new events. These are the things I had to consider. The final thing I would like to emphasize is the cost related to modeling, robustness, and adversarial attacks. Basically, nowadays we are talking about larger and larger networks; Google's deep learning models are huge, right? But not everyone is talking about how much CO2 is produced by, for example, the Google data centers, or how they affect the environment. Not just the local environment: global climate change is affected by the big companies and the cost of their deep learning models.

To wrap it up, the research we are working on covers modeling, attacks, and defenses, and it's a cyclical thing: we have to work on each part separately and together as well. I'm not going through the full description, but it's important not just for my case but for others as well: customer behavior analysis, promotional campaign management, supply chains, speech recognition systems. In all of these we have deep learning models very similar to mine; they all use similar kinds of data, they are all vulnerable to adversarial attacks, and we have to improve their robustness. Thank you. I'm not sure if I've been able to convey everything, but please, let's discuss; I think we have about four minutes, so we can talk through some of the topics I just mentioned. OK, hopefully you enjoyed my presentation, and I hope you enjoy the rest of the conference and the other sessions as well.