NLP in Fintech: How Large Language Models are transforming the future of Fintech

Jayeeta Putatunda
Senior Data Scientist
Automatic Summary

Unlocking the Power of Large Language Models in Fintech

In today's fast-evolving financial landscape, the fintech industry stands on the brink of transformation with the advent of advanced technologies such as large language models (LLMs). If you're operating within the fintech sector or simply have an interest in cutting-edge applications of AI, this blog will delve into how LLMs are being harnessed to revolutionize the realm of financial technology.

Introduction to Large Language Models in Fintech

The recent surge in the development and growth of large language models has carved out new possibilities for various industries. In fintech, a traditionally highly regulated sector, LLMs offer an unprecedented opportunity to leverage vast amounts of data and integrate open-source technologies, enabling businesses to swiftly meet or even exceed their KPIs.

Fintech and the Evolution of NLP

Natural Language Processing (NLP) has taken significant strides, particularly in fintech. From overcoming the limitations of early word vector models like Word2Vec to adopting state-of-the-art models with trillions of parameters, the recent growth in the field is monumental. These developments empower fintech professionals to bypass the exhaustive process of model building from scratch, instead using LLMs as robust starting points for a variety of tasks, from language understanding and summarization to reasoning and inference generation. Jay, a senior data scientist at F Ratings, shares insights on the visual progression of NLP and its multifaceted applications in fintech.

The Adoption of Large Language Models in Fintech

The emergence of open-source large language models in the GPT-4 era introduces a plethora of "flavors," each catering to specific industry needs. Some, such as LLaMA, are restricted to research use, while others, such as Databricks' Dolly, enable commercial implementation; in every case, scrutinizing the source training data is essential to mitigate potential biases.

Models like Google's PaLM indicate a shift toward generalizable knowledge bases, as opposed to merely understanding basic language structures. Jay emphasizes that modern models have transcended the limitations of vocabulary mismatches and flawed representations, providing a foundation from which specialists in the fintech sector can innovate rapidly.

How Fintech is Leveraging NLP

  • Customer service automation, chatbots, and virtual assistants.
  • CRM optimization by finding patterns in vast communication data.
  • Enhanced credit rating systems through the analysis of qualitative data.
  • Governance and fraud detection strategies.
  • Content generation, including report summarization.

Applications within fintech aren't limited to credit scoring and customer service. With tools like ChatGPT and Hugging Face's open-source alternative, HuggingChat, fintech companies can personalize investment experiences or even generate conference-worthy presentation titles.

Understanding Transfer Learning

One of the key contributors to this paradigm shift is transfer learning. By utilizing pre-trained models on similar tasks or domains, professionals can achieve high levels of accuracy with minimal fine-tuning. Transfer learning, thus, serves as a cost-effective approach, particularly invaluable when labeled data is scarce or expensive to procure.

In practice, this involves freezing the initial layers of a network, which capture generic features, and selectively retraining the later layers with specific tasks in mind. Jay underscores the importance of aligning the properties of the model with the intended use case for effective transfer learning.
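In code, the freeze-and-retrain pattern looks roughly like the following sketch, using a small stand-in PyTorch network rather than a real pre-trained model (the layer sizes and the two-class head are illustrative assumptions, not from the talk):

```python
import torch.nn as nn

# Stand-in for a pre-trained network: the early layers capture generic
# features, and the final layer is the original task-specific head.
model = nn.Sequential(
    nn.Linear(128, 64),  # early layer: generic features
    nn.ReLU(),
    nn.Linear(64, 32),   # middle layer
    nn.ReLU(),
    nn.Linear(32, 10),   # original head (say, 10 classes)
)

# Freeze every pre-trained weight so training leaves them untouched.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh head for the new downstream task (say, 2 classes).
# A newly constructed layer defaults to requires_grad=True, so during
# fine-tuning only this layer's weights are updated.
model[4] = nn.Linear(32, 2)

trainable = [p for p in model.parameters() if p.requires_grad]
frozen = [p for p in model.parameters() if not p.requires_grad]
# trainable holds the new head's weight and bias (2 tensors);
# frozen holds the 4 weight/bias tensors of the earlier layers.
```

The same idea carries over to transformer models, where the frozen portion is the embedding and encoder stack and the replaced portion is the task head.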

A Glimpse into Practical Implementations

Hugging Face's Transformers library emerges as a pivotal resource for fintech professionals, providing easy access to pre-built tokenizers and models like DistilBERT for a range of NLP applications.

Jay provides real-world examples to showcase how models can extract relevant answers from financial documents. DistilBERT efficiently pinpoints specifics such as return on investment figures or reasons behind financial standings from larger textual contexts, demonstrating its practicality in a production environment. Furthermore, the Flan-T5 model facilitates reasoning-based tasks, where not just the answer but also the rationale is desired.
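As a rough sketch of this extractive setup (the checkpoint name is a standard SQuAD-distilled DistilBERT on the Hugging Face Hub, and the context below is invented for illustration, not taken from any real filing):

```python
from transformers import pipeline

# Load an extractive question-answering pipeline backed by DistilBERT.
qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

# Invented example context in the style of a financial report.
context = (
    "The company achieved a return on average invested assets of 4.31% "
    "in the fourth quarter. The improvement was driven primarily by "
    "strong net income growth across the core lending segments."
)

result = qa(
    question="Why did the return on invested assets improve?",
    context=context,
)
# result is a dict with the extracted answer span, a confidence
# score, and the character offsets of the span within the context.
print(result["answer"], result["start"], result["end"])
```

Because the model is extractive, the returned answer is always a verbatim span of the supplied context, which makes its behavior easy to audit in a regulated setting.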

Responsibilities in Model Application

While the boom in LLMs opens a plethora of opportunities, it also necessitates responsible application. It's critical to ensure that the models' predictions are thoroughly evaluated and that a human-in-the-loop approach is maintained throughout. Collaboration with Subject Matter Experts (SMEs) and continuous testing across diverse models and datasets pave the way for deploying the most effective solutions in fintech.

It’s clear that the impact of LLMs on fintech is profound, offering both challenges and extraordinary potential. With these insights from an industry expert, stakeholders in the sector can navigate the path towards technology adoption with consideration, choosing the right tools and approaches for their specific needs.

If you're intrigued by the potential of LLMs in fintech or have questions about their applications, feel free to reach out to Jay via LinkedIn or continue exploring the burgeoning realm of NLP to discover the advantages it could hold for your business.

Are you ready to explore the integration of LLMs into your fintech strategies? Harnessing the power of NLP could be the game-changer your company needs to stay ahead in the competitive world of financial technology.

Video Transcription

Hi, everyone. Thanks for joining. I think we're going to give a couple more minutes for everyone to join in, and then we'll get started. I hope you're having a great experience at the Women Tech Network conference and attending a lot of good sessions. I'm sure I have; I wasn't able to make it to quite a few, but I saved them to watch later. We'll just wait a couple more minutes. In the meantime, if anybody has questions, particular concerns, or areas of interest about today's topic, NLP in fintech, feel free to put them in the chat, and we can take those questions later on. I think we can get started, and we'll pause if any additional questions come in via the chat. Hello everyone, and thank you again for joining this session today. My name is Jay. I work as a senior data scientist at F Ratings, and I'm based out of New York.

You can connect with me via LinkedIn; if there are any questions I'm not able to answer today, or there isn't time, please feel free to reach out and we can connect later on as well. The topic we're covering today: I'm sure all of you in the data space have heard about the tremendous growth of large language models, and we'll look into how the expansion of that domain happened over the years. But I also wanted to touch on how the fintech industry, which we usually see as a very governed and very regulated industry, and rightly so, is starting to adopt some of these technologies. The question is: how can we leverage the massive amounts of data we have, along with some of these open-source technologies, in a safer, regulated way, so that we can get to our end goals or KPIs faster, versus trying to reinvent the wheel from scratch every time, which may or may not work out?

So let's jump into it. I'm going to cover a little bit on NLP trends, some of the challenges, what is happening in the NLP-fintech space, how we are using transfer learning in NLP to solve some of these use cases, and then a very quick sample code demo, just to show how easy it is to start integrating this into your systems.

I'm not going to talk about what NLP is today. I'm sure all of you are aware of how we are leveraging large text corpora, feeding them into some of these open-source models and trying to understand meaning from text, and even from images. If you look at this chart of the growth of NLP, and I have one more to show because I ran out of space to keep adding details, you see that in the last couple of years the number of parameters, that is, the size of the models being released, grew exponentially across companies. We are not talking about models with a million parameters anymore; we are almost in the trillions now. And that all happened within the last couple of years of research and boom in the space. I really like the look of this chart. It's really messy, but if you follow the timeline, it shows the different flavors of similar models. At the end, in the open-source GPT-4-era category, there is not one but almost six to seven different flavors of a similar kind of model.

Some are restricted: you can use them for research purposes but not for commercial purposes, like LLaMA. But we already have a couple of options, like Dolly, released by Databricks, that you can implement commercially by retraining or fine-tuning on your own entity data. This opens up the arena of how much can be leveraged from open source, keeping in mind that we should keep a note of the training data that goes into these models, so that we are not introducing any bias into the financial systems and models that we're building in the fintech domain.

This image shows the kinds of tasks that some of these large language models are able to achieve. It's a very nice graphic from one of the blogs Google released about the PaLM model. It shows that we started off from very basic language understanding, summarization, and code completion, and now we are in a phase where a model is almost a complete general knowledge base that you can use to get kick-started, versus trying to build everything from scratch. I'm not sure if you remember the time when we were building Word2Vec models from scratch: they were so prone to errors, and there were out-of-vocabulary word issues. We are in an era that has completely taken out those challenges and created a different class of baseline models that we can get a jump start from. One of the very recent releases from Google is a very powerful model if you have a good dataset and want to build an application that can do summarization and question answering; it can also do inference, even inference with reasoning, where you give it multiple cases and ask why it thinks something is true.

So it can find those reasons and logically attempt to answer from the text context you have provided, and we will see some examples later on. Flan-T5 is an instruction-tuned LLM. When I say instruction-tuned, it means you give multiple inputs. The first input is your context. In this image, the text input we have provided is the question: could Geoffrey Hinton have had a conversation with George Washington? Then there's the instruction, and you are asking the model to answer based on the instruction you provide: give the rationale before answering. It creates a step-by-step answering process and can get you to a generated answer sooner, versus trying to build an NER model or an extraction model from scratch and then trying to stitch together the why, or the reason behind it. That's just one application. I also wanted to show some applications of the Google PaLM model and some of the features it offers.
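The instruction-tuned prompting described here can be sketched with a small Flan-T5 checkpoint (assuming `google/flan-t5-small` from the Hugging Face Hub; the larger variants follow reasoning instructions more reliably than the small one):

```python
from transformers import pipeline

# text2text-generation wraps T5's encoder-decoder interface.
generate = pipeline("text2text-generation", model="google/flan-t5-small")

# The instruction asks for step-by-step reasoning before the answer,
# mirroring the Hinton/Washington example from the talk.
prompt = (
    "Answer the following question by reasoning step by step.\n"
    "Could Geoffrey Hinton have had a conversation with George Washington?"
)

output = generate(prompt, max_new_tokens=64)
answer = output[0]["generated_text"]
print(answer)  # the small checkpoint may give only a terse answer
```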

Things like a counterfactual emoji game, conceptual understanding, and a synonym game. You can ask various prompts, and based on those prompts and whatever response options you provide in the context, it is able to identify the correct response.

A very quick example from the emoji movie game: the prompt is "what movie does this emoji describe?" There is a robot and a cockroach in the world, and there are multiple options, and the model picks WALL-E as the best-fitting answer. Why? From the context and from the prompt, because in the prompt we provided "robot" as a synonymous word, one of the contextual factors the model would take into consideration when answering. Great. So we're going to go very quickly into some of the applications of NLP that I've experienced or seen in the fintech industry. Before that, I'm sure you've all played around with ChatGPT. There's also an open-source alternative to ChatGPT that Hugging Face recently released, called HuggingChat. It's just the first iteration, but feel free to explore it, play with it, and see how it compares against ChatGPT.

I was asking it to highlight some use cases of NLP in fintech, the same concept we're talking about today, and to make them look like conference titles. You can see I got five outputs: "uncovering financial fraud with NLP: a case study in fintech," "personalizing investment experience: a fintech approach." They look very formalized and standard, but they speak to the use case itself. It's always fun to play around with it and see what kind of input pointers it gives us, and you can start to see how to incorporate it into your daily work or personal side projects. Another tidbit: you can also create recipes with it. I tried, and it definitely works. So some of the core areas are customer service, building chatbots and virtual assistants, and CRM optimization, like when you have thousands and thousands of end-to-end sales communication channels that are not centralized.

How do you create value or find patterns as to why you are maybe losing a client, or why a particular ratio looks the way it does? You have to create that contextual parameter, and using LLMs can be very useful and helpful in this scenario. Then there's credit rating, where you quantify metrics by analyzing qualitative text and weighting it against the types of metrics you try to tag it to. And of course there is governance and fraud detection, as well as content generation: building executive summaries, report generation, abstractive summaries. That is why I mentioned Flan-T5; Flan-T5 is also an abstractive model, which means that not only can it extract information from your content, it can also generate its own. I would have to say it's not as strong as the GPT-based models, and rightly so, because it's a much smaller model and that's not its purpose. But you can still create one-line summaries: if you have five paragraphs, you can create a topic for what the five paragraphs are talking about, and you can create multi-line summaries as well. We just have to make sure we fine-tune and tune our model parameters so that it doesn't start to hallucinate and write incorrect information as output. Great.

So how are we doing it? There's machine translation; conversational AI, where intent classification and knowledge-base question answering all fall under that bucket; natural text generation; and legal document understanding, where we are trying to understand what kinds of entities are mentioned.

What are the relations, or what is the graph of knowledge we can identify between multiple entities, or for a particular topic you want to explore? Then there are text-based applications like fraud detection, threat detection, et cetera, which are very common in quite a few of the industries I have worked in and seen. Before we go into the code and actually talk about the models, there's one more concept.

This concept of transfer learning is what has made this shift in the industry, in the mindspace of professionals as well as researchers: how can we leverage models that have already been pre-built on some data, use those weights, understand their meaning, and then apply them to our dataset? If our dataset matches up well with the dataset the model was trained on, maybe we can leverage it off the shelf without even fine-tuning. Whereas if your dataset is very distinguished and has a nuanced meaning to it, like legal documents or patent applications, then you definitely need a fine-tuned version. But you can still utilize those pre-built weights, learn from them as your general knowledge base, and build on top of that, versus starting from scratch. So when do you use transfer learning? Like I said, when there's a scarcity of labeled data: you do not have enough to start training your neural network or big LLM from scratch. Creating labeled data is definitely expensive; you need SMEs and experts who have to spend time creating good data quality, and even that may or may not be enough to get you to the model KPIs you are targeting.

So transfer learning fits those scenarios where a pre-trained model with a huge dataset already exists and has the same kind of properties as the task you are trying to solve. And we will see why. It can be either the same domain with a different task, or a different domain with the same task. Here I took an image example because it's a little easier to explain. The tasks are different because the first task the model was trained on, T1, is only about classification of general object labels, but your target task is to identify urban versus rural. The basic domain is the same: it's an image-based problem, and you're trying to distinguish objects, but the task is different. The other flavor is that the domain is different but the task is the same. In that example the task is POS tagging, which is the same for both use cases, but the domain is different.

Why? Because the pre-trained model's source language was German, but you are applying it to an English document for inference; in that case the domain is different. And like I mentioned, if your base task's parameters or properties are similar to your target task, then off-the-shelf pre-trained models can definitely be used as feature extractors. What you need to do is freeze all the weights of the early layers. In a network we have multiple layers, and it's a similar concept in LLMs: we have multiple layers of weights. What the model has learned about the data format, the baseline embeddings, and the associated embedding maps won't change too much, so we freeze the weights of those model layers. Then, during training, we just replace the last layer to adapt it for your use case. For example, say you want to utilize a DistilBERT model, but your downstream task is not classification but Q&A.

So you change the last layer to update for that. Or the datasets may be completely different: you have nuanced data, again like a legal dataset, or patent applications with very specialized language that you wouldn't find in the generic documents these LLMs were trained on, like blogs, Wikipedia, et cetera.

In that case we do off-the-shelf plus augmentation, which means the initial layers, which capture the generic features, remain the same; the meaning of the words the model has already learned wouldn't change. But for the later layers we focus on the specific tasks and selectively retrain the model with our newer datasets, and that's the process of fine-tuning. We have well-defined targets: we want to, say, do Q&A on a legal dataset, and we are fine-tuning on top of, for example, a DistilBERT model that is already trained on a Q&A task, adding your company-specific data to make it more valuable and more accurate in understanding the wording, the nuances, and the differences between concepts in a much better way. So there are two options: use it off the shelf, or do off-the-shelf plus augmentation. This is just a visual representation of the exact difference I was talking about in the previous slide. Here you have this entire feature learning that happens in the first stage.

From all the training images, you have the source task of classifying whether it's a cat, a tree, or a lake, and you see all these convolutional layers and fully connected layers. What happens in transfer learning is that you have a different task: the training images are urban and rural images. You keep C1 through C5 frozen, actually C1 through FC7 frozen, because those are very similar to your input task and you don't want to change them at all. But then you add a new layer, FC8, which you see here: that's where you add the final layer to identify urban versus rural, instead of doing the object detection of the pre-trained model's last layer. That's how we learn from what the model initially learned and then use our own data to update the last layer so we can use it for our use case. Great. And why do you want to do that? What's the benefit? It definitely gives you a higher start: a better initial skill from the source model. Before fine-tuning, you're not starting from scratch.

Like I mentioned, if you remember building Doc2Vec and Word2Vec models from scratch, initially you would have an out-of-vocabulary problem. There were situations where models wouldn't understand or represent embeddings correctly because there were not enough instances or examples of the sentences you were passing to the model, and it would not understand those relationships well. So this gives you a better start, a better initial skill for the model. There's also a higher slope: it is able to learn faster using a small training dataset, and with higher accuracy. And it's definitely faster and cheaper to train. Just to give you an example, there were cases where I fine-tuned models on maybe 2,000 examples per label, and it worked out with very good results; we met the benchmark for accuracy scores. Just imagine if we had to build it from scratch: that would have required thousands of labeled documents just to create a baseline model.

Some of the applications are in information retrieval, which is part of the Q&A family. It may not be the only application, but just to give an idea: with the Flan-T5 model I was talking about before, you give it a context, say a paragraph or a Wikipedia passage you're processing, and then a question you want to ask, and the question can be a free-flowing natural-language question.

The goal for the model is to identify which sentence contains the correct answer and what the exact answer is. With Flan-T5 we can do both: we can give a prompt saying, "write a one-line summary of the correct answer," and it can do so for us, and it also extracts the correct answer from the text as a particular extractive text value. So let's look at a sample Q&A. I'm sure all of you have explored the Hugging Face library; they are the epitome of making AI accessible. They have a lot of good models up there, plus datasets, spaces, and communities that people are using and uploading their models to every day. If you have not explored it, I would highly encourage you to look into Hugging Face; you can download all these large language models from Hugging Face for your test cases. What we are doing here is installing transformers, installing torch, and then from transformers we're calling in DistilBertTokenizer, which is a pre-built tokenizer from the trained DistilBERT model.

That means that whatever data went into DistilBERT's training, whatever it has learned about the tokenization process, is saved there, and we can utilize that to tokenize our own dataset; we don't need to create a new tokenizer. Then there's DistilBertForQuestionAnswering, which is the final modular head for that model. So we're calling in the tokenizer and the model from the pre-trained checkpoints. You'll see there are multiple flavors of the BERT model: there's bert-base-uncased and there's bert-base-cased, which means the data it was trained on was case-sensitive, so if it's a proper noun, the first letter is a capital letter, whereas in the uncased version everything has been lowercased. For some use cases the cased version can be very valuable, for example when we're trying to do NER extractions and want to retain some of that nuanced embedding understanding in the model's weights. So we call both the tokenizer and the model, and that's all you have to do to get the model to an inference stage.

The next part you see here shows the context I provided, from a very generic document, Apple's sustainability or, I think, annual report. It talks about how Apple Inc. achieved a return on average invested assets in some of these areas, and ROI improved. This context can be longer; BERT has a context length of 512 tokens. We can break bigger contexts into smaller chunks and create a linkage between them, so that if the answer exists in a different chunk after you have split a bigger context, we don't lose vital information, with half the answer in one chunk and half in the second. That shouldn't happen, right?
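The overlapping-chunk idea can be sketched in plain Python; the 512-token limit matches BERT's context window, while the 64-token overlap is an illustrative choice:

```python
def chunk_tokens(tokens, max_len=512, overlap=64):
    """Split a token sequence into overlapping windows so that an
    answer straddling a chunk boundary still appears whole in at
    least one chunk (as long as it is shorter than the overlap)."""
    if overlap >= max_len:
        raise ValueError("overlap must be smaller than max_len")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # this window already reaches the end of the text
    return chunks

# Stand-in for 1,200 token ids produced by a tokenizer.
tokens = list(range(1200))
chunks = chunk_tokens(tokens)
print([len(c) for c in chunks])  # [512, 512, 304]
```

Each chunk is then run through the QA model separately, and the span with the highest confidence score across chunks is kept as the final answer.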

After providing this context, I ask a very simple question: what did Apple achieve? There's a small function where I call the model and the input tokenizer and ask it to extract the information. It goes into the QA function, which you do not see in the slide, but the complete answer that comes out is: return on average invested assets of 4.31% in the fourth quarter. So what did Apple achieve? This was their achievement: a return on average invested assets. And why did Apple's ROI improve? The answer is: because they had net income growth. Since DistilBERT is a more extractive model, it is able to identify very clearly the area of the context that answers your particular question. You can test any kind of context with more numerical data, and the questions can be more open-ended as well. One caveat: answering open-ended questions can get a little tricky, because if you're asking something like "did it do it, yes or no?", BERT is not prepared to answer that directly; it needs to find the span first.

BERT first needs to find the answer, which you then post-process. So you can still work with different flavors of question types based on your business use case and leverage this, adding a post-processing output-cleaning step just to make sure it aligns with your question types.

One other example: what is the status of the total ranking? The status of the total ranking deteriorated. In some use cases, identifying only this one keyword, "deteriorated," as the answer may not serve the purpose, and you would want more associated context to support your answer of why it deteriorated. In those cases Flan-T5 would be a better model, because you can add prompts and say: what is the status of the total ranking, and give a reasoning for your answer. Since Flan-T5, like I mentioned before, is an instruction-tuned model, it will be able to better identify and give the logic for why it ended up with that answer. But DistilBERT, being a very small, user-friendly, production-friendly, and deployment-friendly model, gives you an edge in instances where we can quickly deploy: you can get set up without spending too much time in R&D or in setting up and standardizing prompts.

Because prompt engineering, and I'm sure all of you have heard the term, is definitely a crucial thing: changing the semantics of the prompt, the nuances of how you frame the question, can affect the outcome of the answers. So we need to be very careful, especially when we are applying it to a business use case where the end users are going to see the results; you don't want to create unnecessary friction there. DistilBERT can still get you there, identifying the location of the exact answer, and then you can utilize Flan-T5 as an associated next step to identify more context or more information, so that you have two answers to validate, creating a chain of model answers for the downstream task at hand.

And then, again, just to show that it can also identify metrics: how much did the total ranking deteriorate? It went from 56 to 60, which we saw in the context before. I think the text is missing from the screenshot, but that was the answer that was correctly taken into account. Great. So my ending note would be: since we are seeing such a boom in the LLM space, with so many different kinds of models and flavors coming out, it's very interesting to see how leaders and business representatives are finally becoming aware of it, and ChatGPT definitely helped open up that discussion. But we also need to be very conscious about what we promise. We shouldn't over-promise that a complete workflow that was happening manually can be fully automated and will reach a human-intelligence benchmark. At least in all the use cases I have seen, that is a super over-promise, and we shouldn't go to that space at all. So it is definitely tricky.

We need to make sure that the model we use and the data we are using are thoroughly evaluated, and that there's a human in the loop for QA at all stages. That helps keep the data and the model's outputs in check before pushing the final results out to the end users.

So always review the use cases and build good business sense by working with SMEs. And much of the time, since open source is huge, we get confused about which model to use. That, again, comes down to testing: testing different flavors of models, testing against the kinds of datasets you have, and then finally getting to the answer of what works best for you. And I think that's all from my end. I wanted to keep some time at the end for any additional questions. If you have questions later on or want to reach out, please feel free. I'll stop there and see if there are any questions I can answer specifically. This was great; thank you for joining.