Lessons learned from analyzing 60 billion lines of code: how to build maintainable software architectures. by Vidya Thangirala


Video Transcription

Hi, everyone. Welcome to the session. It's 9:10 Eastern here — I'm based in New York — so I'll get started. This session is about building maintainable software architectures using metrics. My name is Vidya Thangirala. I'm a software engineering consultant at Software Improvement Group, and my background is in computer engineering. I've been in the industry for over 15 years; I started my career as a software developer, then moved into management roles, and now I call myself a consultant. I'm originally from India, I did my university in Canada, and I'm based in Brooklyn, New York. In this session, I hope to share some insights and practical advice around building maintainable architectures using measurements. I hope you find it useful and enjoy the session. So, getting started: specifically, I'll be looking at industry trends and challenges, what we see in software build quality across different industries, what maintainable architectures are, and why they matter — specifically from a development standpoint. We know that architects and management would like them.

But how can this help developers, and how can you actually use measurements in a practical way to build maintainable architectures in your own environment? I hope to balance theory with practical examples by looking at a case study of an open source system you might have heard of, called Hadoop. Finally, I'll briefly touch upon how you might prioritize architectural effort in your own environment, and I'll leave a few minutes at the end for your questions. Go ahead and write your questions in the chat window; I won't be able to see them during the session, but I'll go through them afterwards and answer any questions you might have. OK, let's get started. First things first: what are maintainable architectures, and why do they matter? I'll start with one of my favorite quotes, from James Coplien, a writer, lecturer and researcher in the field of computer science.

This is what he says: architecture is not so much about the software, but about the people who write the software. The code doesn't care how cohesive or decoupled it is, but people do care whether or not there's coupling with other team members within the organization. The point here is that good software architecture benefits developers — it's the people writing the systems whom it really helps. If you're a developer, good architecture should enable you and your teammates to have a shared understanding of the system design at any given time, not just at the beginning of a planning phase. It should also help you contain software complexity, so you can manage it effectively as your system grows and evolves. It should isolate parts of the problem from one another, so you're able to work on individual areas. And finally, it lets you maintain the freedom to get things wrong, and the ability to learn and recover from mistakes quickly. (Let me take a quick look at the chat to make sure you can hear me — let me know if you can.) So what does this lead to, from a software development standpoint? Ultimately, we've seen this enable developers to keep delivering business value to their customers in the future.

We've all seen these business goals: greater flexibility to develop innovative features, pushing customer features out in a faster time frame, being able to scale to customers seamlessly, and lowering the overall cost of development and maintenance.

Underlying all of these business goals is the holy grail of software product quality: maintainable systems. And what does that mean? Essentially, when you think about software maintainability or architecture: for any piece of software, you have to decide how it behaves.

Software is often only seen from the outside — say its functions or UI — which is the tip of the iceberg in that picture. But here's a list from ISO, a formal approach that describes nonfunctional software quality attributes; the majority of the work and complexity is actually nonfunctional. These are things like security, maintainability, performance, resilience and so on.

These are still behaviors that you need to design and develop in your system, just like functionality. But unlike specific features like a shopping cart, you can't fix security or maintainability or resilience in a single place in your system. Architecture is essentially a collection of cross-cutting constraints, or behaviors, that developers agree to implement so that their system behaves in a desirable way. Now that you have a high-level understanding of what it is, let's look at some trends. What we find is that designing good maintainable architectures is genuinely difficult for complex, large-scale systems. I'll share a few trends that we're seeing, some context on how we got here, and what we can do to help if we're working on large-scale systems. I'll start with the context. At Software Improvement Group, we take a benchmark approach to looking at software build quality. Our database spans more than 13 years of measurements, includes more than 7,500 enterprise systems, and aggregates the build quality of more than 70 billion lines of code in over 300 different technologies.

Each white dot that you see on that chart represents a system we've analyzed. We've measured, monitored and captured the essential relationship between a system's attributes — things like duplication, complexity and entanglement — and its overall build quality, and came up with a benchmark chart.

What you're seeing on the X axis is volume, on a logarithmic scale, and we update this every year. In general, the trend we see is that as the volume of a system goes up, the quality of the software goes down — volume is on the X axis, as I mentioned, and build quality is on the Y axis. That's because larger systems tend to become very complex and hard to maintain over time, and this has a direct relation to the number of defects or issues found in the system. Let's take a look at the next trend. This graph might be hard to read, but overall, we've also analyzed this across different sectors and technology stacks in various industries. In each industry we have at least 50 enterprise systems, enough to treat the data as statistically meaningful. What we found is that all of these sectors are stuck with legacy systems. These are large incumbent systems — on average three or four times as big as Java or Microsoft .NET systems — that still fulfill essential functions within their organizations and are hard to replace.

But what we see across all sectors is that their build quality consistently scores well below the market average, which implies a huge risk — and an even higher future risk — for the cost and overall difficulty of keeping these systems up and running. Now I'm going to focus specifically on the architecture: why is this difficult to maintain, and what are we seeing in legacy and large-scale systems? Some of the lessons we've learned from looking at a lot of these systems: first, large-scale software is typically very complex, highly coupled and difficult to maintain. Knowledge is typically thinly distributed across the IT organization. Because of the volume, it's hard to navigate dependencies between systems or services, yet there's a constant need to evolve them across various technology stacks and modernize them. The original architectural design often makes it very hard to make incremental changes, and overall knowledge of all the subsystems is generally limited. So what we're finding is that it's not just the architectural characteristics of the system that are an issue; there's also a knowledge issue in how the system evolves over time. It's the people and the processes as well as the technology itself.

So let's see what we can do about this, now that I've sufficiently scared you with all the issues we find in large-scale systems. There are things we can do to help build maintainable software architectures that really pay off in the long run, and these are practical approaches. I'm sure all of you know there are lots of different architectural styles out there, and I'm not going to tell you which one to use, because there's no generic, one-size-fits-all best architecture. The best architecture is the one that solves your problem effectively for the foreseeable future. What I'll focus on instead is specifically using measurements that can help your architecture stay intact and manageable, so you can evolve your development work and manage the complexity as it grows. And I mentioned large-scale legacy systems, but it's not just legacy.

It's not just legacy systems that have issues; it's also large-scale modern systems. You might be familiar with the dreaded big ball of mud — what you're seeing in that chart is a real visualization of Uber's ecosystem of microservices. They're a modern stack using a modern architecture style, and they still have to worry about badly tangled code. This is bad for developers because it becomes very hard to test and deploy software independently, along with all the other issues that come with large-scale systems: high complexity due to dependencies and coupling between components and services. But regardless of the type of architecture you choose, there are some general characteristics we can look at when designing for, and measuring, maintainable architectures, and I'll take a look at that next. Earlier I mentioned a formal approach to maintainability from ISO, which gives us certain qualities we can measure. But ISO doesn't provide a standard for maintainable architectural qualities the way it does for, say, performance. So at SIG, we've developed an architectural quality model and categorized the elements you can measure in both the static and the evolving — what we call dynamic — code base.

This is based on our large benchmark data set of enterprise and open source systems, and years of research and analysis. The qualities are as follows. I'll start at the top with structure: the arrangement of relations between the parts or elements of an architecture — how the system is structured from the top level down to the code-base level. The next is communication: the complexity of imparting or exchanging data throughout the architecture — how easily are you able to share data between services? Next is data access, which is essentially data coupling: how easy and efficient is it to access or retrieve data stored within a data store or repository in different parts of the system? The next is technology usage: the degree to which technologies are common and standard across the architecture. For example, if you're working on a COBOL system, you're going to have a hard time finding people who develop in COBOL over the next 10 years; but if you're using a modern technology like Java, you might be at lower architectural risk.

So it also involves the people as well as the technology you're using. The next is evolution — and this is probably what most people think of when they think of architecture: the degree to which changes can be made in isolation across the architecture. And last but not least is knowledge: the degree of technical knowledge distribution among the team members within the organization. Now that I've talked a little about the high-level quality attributes we've categorized — and we've tried and tested this, and it works really well — in the rest of the session I'll take an example of an element that impacts one of the architectural qualities, and walk you through a practical example of how you can measure it, and then continue to measure and monitor it to create a healthy ecosystem.

The example metric I'm going to use is the one I spoke about earlier: coupling. This is a common problem in legacy and large-scale systems — it's essentially that big ball of mud. Let's see how you can use this metric through a case study. Coupling impacts many things, but one of the things it impacts is the evolution of your architecture. If you've got a badly tangled code base, it's going to be very hard to evolve that architecture style — to add more features or more services — because there are too many dependencies across it. So let's look at the Apache Hadoop project. It's a large-scale open source system; for proprietary reasons I can't show any of the actual enterprise systems we look at, but Hadoop is a very good example that I think will highlight how you can look at measurements and continue to monitor them. At a very high level, Hadoop is a framework that enables processing of large data sets, which usually reside in the form of clusters. It was built for big data, and it's made up of several different modules, which you see on the left-hand side.

That's a high-level view, and it's also powered by a community of over 400 open source developers. It has about 1 million lines of code, so it's a very large ecosystem. It's primarily Java, but it includes other technologies as well. This is what we call a large-scale system: a lot of developers, a huge number of lines of code, and lots of different modules. So let's take a look at the overall system quality. When we look at it purely from a software build quality perspective, we see that Hadoop is on the larger end of that volume scale, and it falls within the industry average in terms of overall software build quality — what we call maintainability — though on the lower end of that spectrum.

That just gives us a high-level view. Now let's take an architectural quality view of what this system actually looks like and what it does internally. When you look at the architecture quality here, you're seeing a rating based on the benchmark data set I mentioned earlier and the quality of the individual attributes I shared. To keep things simple, I've put them in risk categories, from very critical — needing immediate attention — down to very low, going from red to green. When we look at this, we see a couple of quality attributes where the system is doing well, a couple where it seems average, and a couple in the higher risk category: specifically, evolution and knowledge. So let's pick evolution and take a deeper dive into what that looks like and what we can do about it. Before I get there, I want to focus a little on that particular quality, evolution, and on a specific metric we can measure within a code base.

Evolution is essentially the degree to which a developer can make changes in isolation across the architecture. One measurement we can take is to analyze the structural dependencies across components in the system. In the industry this is typically called component coupling.

A dependency is a relationship between two or more modules or components within a code base that are directly or indirectly dependent on each other to function properly. Why do we need to look at this? Because tightly coupled systems are very hard to evolve: if any code changes, it may significantly impact the rest of the system. They're also very difficult to maintain, because any change to one component may require a change in another component, so it carries a global risk. There's also the risk of a component becoming a single point of failure, especially if it has a lot of dependencies across the system. Our goal here is to limit the dependencies and tight coupling across components. We can't fully eliminate them — that's not practical — but we can monitor and limit them, so the system can evolve easily and independently and avoid structural erosion over time. I hope that's a good example of a metric you can actually measure. The next piece: let's look back at Hadoop's architecture quality, component by component. This is a visual of the communication calls across components, which cause explicit dependencies within the system. You can see there are a lot of different communication calls going on, and each box represents the size, in lines of code, of a component.

The larger the box, generally, the more dependencies there are. What we want to do is clearly define and limit the communication lines between components, because every inter-component communication adds complexity to the architecture and makes it harder to make changes in isolation and to extend the architecture. We want to contain the complexity within the system to keep the architecture maintainable. So let's zoom in on the largest component you're seeing there, which is the YARN project. When we look at it, what do we see? We see that the YARN component is at very high risk of being dependent on other components, directly or indirectly — again I've put the risk rating, from red to green, to keep it simple. That component has over 6,000 external dependencies: 6,000 calls, or some sort of implicit or explicit dependency. This means a change in one part of the system can break something in a completely unrelated part, if you don't know what it's impacting.
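(To make the metric concrete, here is a minimal sketch of how such a dependency count could be approximated on a Java code base like Hadoop's, treating each top-level source directory as a component and each cross-component import as one explicit dependency. The directory layout and the import heuristic are my assumptions for illustration — this is not how SIG's tooling actually works.)

```python
import re
from collections import Counter
from pathlib import Path

def component_of(path: Path, root: Path) -> str:
    # Assumption: the first directory under the repo root is the component,
    # e.g. hadoop-yarn-project/... -> hadoop-yarn-project.
    return path.relative_to(root).parts[0]

def cross_component_imports(root_dir: str) -> Counter:
    root = Path(root_dir)
    files = list(root.rglob("*.java"))
    # Learn which component "owns" each Java package from the package
    # declarations found in that component's own files.
    owner = {}
    for f in files:
        m = re.search(r"^\s*package\s+([\w.]+)\s*;", f.read_text(errors="ignore"), re.M)
        if m:
            owner[m.group(1)] = component_of(f, root)
    coupling = Counter()
    for f in files:
        src = component_of(f, root)
        text = f.read_text(errors="ignore")
        for pkg in re.findall(r"^\s*import\s+(?:static\s+)?([\w.]+)\.\w+\s*;", text, re.M):
            dst = owner.get(pkg)
            if dst is not None and dst != src:
                coupling[(src, dst)] += 1  # one explicit cross-component dependency
    return coupling

if __name__ == "__main__":
    for (src, dst), n in cross_component_imports("hadoop").most_common(10):
        print(f"{src} -> {dst}: {n} import dependencies")
```

Summing a component's incoming and outgoing pairs gives the kind of external-dependency count shown on the slide.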

So we can measure this in the system, practically, and provide an objective view of how healthy the architecture of the software is at a deeper level — which means we can actually fix it. Let's say we've detected this issue, we've measured it, and we see there's high coupling here. I'm a smart engineer with access to all kinds of resources, so I'm going to figure out how to fix it, and we fix it. I won't go into the details of how, but there's lots of rich information on patterns and principles, and this is where industry best practices help you resolve issues like this. These are common problems that others have seen, and they've written books about how to reduce a high number of component dependencies to make your code better. It helps you test, understand and maintain your code overall — not to mention reuse it at a later time. A great book I recommend here, if you want to learn more about how to fix some of these issues, is Agile Software Development: Principles, Patterns, and Practices.
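(One of the principles from that book, dependency inversion, is a typical way to cut a direct component-to-component dependency. Here is a minimal sketch in Python with hypothetical names — the real Hadoop code is Java; this just shows the shape of the refactoring.)

```python
from typing import Protocol

# Before: the scheduler component reached directly into the storage
# component, e.g. `from storage.hdfs_client import HdfsClient` -- an
# explicit cross-component dependency.
#
# After: the scheduler depends only on an abstraction it owns, and the
# concrete storage implementation is injected from the outside.

class BlockStore(Protocol):          # interface owned by the scheduler component
    def read(self, path: str) -> bytes: ...

class Scheduler:
    def __init__(self, store: BlockStore) -> None:
        self.store = store           # any implementation will do

    def plan(self, path: str) -> int:
        return len(self.store.read(path))

# The storage component now implements the scheduler's abstraction, so
# the dependency arrow points toward the interface, not the component.
class InMemoryStore:
    def __init__(self, data: dict) -> None:
        self.data = data

    def read(self, path: str) -> bytes:
        return self.data[path]

if __name__ == "__main__":
    s = Scheduler(InMemoryStore({"/job/input": b"hello"}))
    print(s.plan("/job/input"))      # -> 5
```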

So now let's say we've fixed that, and we think we're happy campers. We've fixed those dependencies, but we can also continue to look at metrics. There are lots of metrics you can look at, but it's important to pick ones that are valuable and meaningful to you. I talked before about explicit dependencies — how one component can be related to another through communication calls. Let's look at another metric that also impacts architecture evolution, called code reuse. Code reuse here is essentially the duplication of code across components in a system, and it can be measured in lines of code. Duplication within a component or module is something developers generally need to mitigate at the code level to keep the code clean, but duplication between components actually results in higher coupling and creates implicit dependencies between responsibilities in the system.

Our goal here is to reduce excessive duplication across components, which creates — as I mentioned — implicit dependencies and coupling, and makes the system hard to evolve. So let's look at our YARN component and see how it's doing. Digging a little deeper: the colored bar on the left, where the red marks are, shows the lines of code that are duplicated across components, and we see that about 70% of the YARN component's code base is actually duplicated in other parts of the system.

That creates implicit dependencies and other issues: if you're changing code in one part of your system, you now have to figure out where else it appears and change that part of the system as well. Here we see that between YARN and MapReduce there are over 50,000 duplicated lines of code. Some of it might be by design — we can't reduce all duplication, that's just not possible — but some of it might be lazy copy-and-paste code. So we want to take a deeper dive and see what that code looks like. Here's an example from the code base of what's really happening, and we can look at it and ask: is it possible to extract this into a common class and reduce the dependencies across components? In this case it's only about nine lines of code, but it appears across so many different modules and components that it may become a headache over time. Imagine a code base with 1 million lines of code and that many modules: how are you going to know where to find all these changes?
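(As an illustration of the "extract into a common class" move, here is a before-and-after sketch. The function and field names are hypothetical, not taken from Hadoop, and the real code would be Java; the point is only the shape of the change.)

```python
# Before: the same validation block is copy-pasted into both the
# YARN-side and the MapReduce-side modules (hypothetical names).

def submit_yarn_job(config: dict) -> None:
    if config.get("queue") is None:
        raise ValueError("queue is required")
    if config.get("memory_mb", 0) <= 0:
        raise ValueError("memory_mb must be positive")
    ...  # the same block is duplicated in submit_mapreduce_job

# After: the shared block lives in one common module, and both
# components call it -- turning an implicit dependency (two copies
# that must change together) into one explicit point of change.

def validate_job_config(config: dict) -> None:
    """Shared validation used by both components."""
    if config.get("queue") is None:
        raise ValueError("queue is required")
    if config.get("memory_mb", 0) <= 0:
        raise ValueError("memory_mb must be positive")

def submit_yarn_job_v2(config: dict) -> None:
    validate_job_config(config)
    # ...component-specific submission logic...
```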

That's why code reuse is another metric you can look at for evolution. Given the time constraints of this session, I'm not going to go through all the examples, but this shows how you can look at your architecture not just from a very high level: define some quality attributes for the architecture and drill down a level at a time. Going back to the slide where I listed the different architectural qualities: within each quality, you can look at what metrics you can measure — within your code base, at the architecture level, at the deployment level — monitor them accurately, create some thresholds, and figure out whether something is worth refactoring right now. That's how you create clean code, and that's also how you create maintainable architectures. Now let's look at how to prioritize this work. Typically, in large or large-scale legacy systems, there are simply going to be too many issues; it's not realistic, or even cost-effective, to solve all the problems and get to a perfect architectural system. So we typically recommend thinking about the things that keep you awake at night.

That usually comes from different considerations, such as where your application is in its life cycle: older legacy systems tend to be very large and have lower overall build quality. We look at that and ask, OK, where can we start? Step one: look at the metrics, analyze the architecture, and identify the possible issues. Then establish some goals and create related metrics — and this is important — create thresholds that are reasonable and manageable by developers, put them in categories, and prioritize them based on risk: the impact on the overall system and the likelihood of it occurring. That's essentially what risk is.

Then you can put that into your backlog and start working at it one item at a time. Once you have that, you can take an iterative approach to your metrics-based feedback loop, so you know your architecture. How do you know if your architecture is working well? Essentially, you set your goal — it could be something like "I want faster time to market", or a performance goal like "I want my system to respond within a certain number of milliseconds" — then you set some measurements, and then you set some thresholds that define what is acceptable.

Once you have that, you implement a change — you write your code or refactor your system — and then you analyze it to see whether the system has actually met the desired threshold you set. If it does, great: the build passes (you can implement all of this in your build pipeline) and you move on to the next issue. If it doesn't, you go back to the code base and ask: what else can I change or refactor here to improve the system? And then you go back into what is essentially a measure, analyze and improve feedback loop. The idea is to break the work down into small, manageable chunks and take an iterative approach. That, essentially, is how you manage a large amount of architectural work: prioritize it, do it in a manageable way, and do it in a way that's measurable, so you're getting feedback and you know you're improving the system over time.
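(A minimal sketch of what such a pipeline gate could look like: a script that reads a metrics report produced by an earlier analysis step and fails the build when a threshold is exceeded. The metric names, report format and threshold values here are assumptions for illustration.)

```python
import json
import sys

# Hypothetical metrics report from an earlier analysis step, e.g.
# {"cross_component_dependencies": 6200, "duplication_pct": 31.5}
THRESHOLDS = {
    "cross_component_dependencies": 5000,  # agreed, attainable limits
    "duplication_pct": 25.0,
}

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        report = json.load(fh)
    failed = False
    for metric, limit in THRESHOLDS.items():
        value = report.get(metric)
        if value is not None and value > limit:
            print(f"FAIL {metric}: {value} > threshold {limit}")
            failed = True
        else:
            print(f"OK   {metric}: {value} <= {limit}")
    return 1 if failed else 0  # a nonzero exit code fails the build

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "metrics.json"))
```

Raising the thresholds over time is how the "scale this approach to a higher standard" idea in the next takeaway would be wired in.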

That takes us to one of the last slides here, which is the key takeaway: you can build maintainable architectures if you make your issues visible at all levels — at the code-base level, at the deployment level, at the architecture level. You can ensure predictability by having quality measurements with attainable thresholds that are relevant and meaningful to you and your business case, and then continuously monitoring them as your code evolves. Once you start improving your system, you're improving its overall quality, and you can scale this approach by raising the quality thresholds you've set to a higher standard and continuing to improve. That's essentially how you end up with a maintainable software system.

Again, this last piece is a bit theoretical, but hopefully the practical side — where I took a couple of metrics and examples — helps you understand how you can actually look at a system, prioritize the issues, and apply this to your work in your own environment. OK, that brings me to the end of my talk, which leaves just one last slide: we're always recruiting, and we want more women in tech. If you would like to join us, please do take a look at our website. You can also join one of the symposiums we run to learn more about our organization, or hop onto one of the virtual booths over the next couple of days. I'll be on one of them in my time zone, Eastern time, and I'll be happy to chat and answer any of your questions.

I hope this was helpful. Let me take a look at some of the questions, if you have any. "How do you find duplicate code? Are there tools available to identify duplication?" Yes, there are tools. The company I work at, Software Improvement Group, has proprietary tools for large enterprise systems, but there are also open source tools you can use, like SonarQube and similar, that you can run on your own application to detect some of those quality issues — though they're generally at the code-base level. For the architecture level, metrics are in high demand, but I haven't seen anything in the industry where you can say: here's a specific set of architectural issues, and here are the metrics to look at. I think that's just because it's harder to define architectural qualities and measure them. But you can start with one of those open source tools — or of course with Software Improvement Group; our bread and butter is creating quality software and measuring it.

I hope that answers your question, Lina. Or, if you're feeling excited about developing, you could probably build one yourself. One way to look at it: you can set a threshold. Take code duplication as an example. You could say, I'm going to set a threshold of, say, 10 or 20 or 30 consecutive lines of code in one place in my software, and write a parser to check whether that block is replicated anywhere else.
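(A rough sketch of that parser idea: hash every window of N consecutive lines and report windows that show up in more than one file. The window size and file pattern are arbitrary choices of mine, and real duplication detectors normalize code much more carefully.)

```python
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 10  # flag runs of 10 identical consecutive lines

def find_duplicates(root: str, pattern: str = "*.java") -> None:
    seen = defaultdict(list)  # window hash -> [(file, starting line)]
    for f in Path(root).rglob(pattern):
        lines = [l.strip() for l in f.read_text(errors="ignore").splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            window = "\n".join(lines[i:i + WINDOW])
            if not window.strip():
                continue  # skip all-blank windows
            key = hashlib.sha1(window.encode()).hexdigest()
            seen[key].append((str(f), i + 1))
    # Report only windows that occur in more than one file, since
    # cross-file (and cross-component) duplication is what couples
    # responsibilities together.
    for locations in seen.values():
        if len({path for path, _ in locations}) > 1:
            print("duplicated block at:", locations)

if __name__ == "__main__":
    find_duplicates("src")
```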

With, say, nine or ten consecutive lines as the window, a script like that could pick duplicates up in your own code base — though it does involve more work. I'm looking at some of the questions here. "I work in a system that is nearly two years old and it's so complex it takes new senior developers 6 to 12 months to understand. Are there any first steps for simplifying a system that is complex for the sake of cleverness rather than need?" That's a very good question. So the question is: what is the first step for simplifying a complex system? Again, the way to prioritize the work is first to do an overall analysis of your architecture and see what the architectural issues are — get a broad view of what's happening in the system — and then take some of those issues and ask what your goal is. In this case, your goal, from a developer standpoint, is to fix the fact that it takes close to a year for a senior developer to understand the system. So here's what you'd want to do in that case.

Again, I'm just formulating an example here. Let's say you've got a huge monolith — say 500,000 lines of code; that may not be unusually huge, but even then it's going to be hard to find where a piece of code or functionality lives. What you can do is say, OK, let me break this huge monolith down into smaller components. As a developer, if my job is to work on, say, a shopping cart, I want to know where to go to find that piece of code, fix it, and make sure there are no other dependencies impacting other parts of the system. So I would look at your goal — for developers to understand a complex system — and look at the system and say: OK, this seems to be a huge monolith, so what can I do to make it easier for me and other developers to understand? You can start breaking it down. There's an actual metric for this called component independence. Look at the size of each component: let's say your module has a front end, a back end and a middleware layer. Look at the front end and ask, why is this front end so big? Is there a way I can break it into smaller front ends, or split it into smaller components?

Then look at it and say: OK, I'm going to pull the shopping cart out and focus on that, because that's the most critical part of our business, and I need developers to be able to work on it faster without impacting other things. So I hope that helps: set your goal, figure out what the issue is, break it down into smaller chunks, and focus on that goal — in this case, reducing the time it takes developers to understand a piece of software. Great, thank you — glad it was insightful. Also, over the next couple of days I'll be in one of the virtual booths, so if you have any more questions, come by; this is a passion of mine, and I love doing this. I've been a developer and a manager, so I know all about it — I've written horrible-quality code with lots of technical debt, and I've also managed and inherited things. So this is close to my heart, and I'm very happy to talk about it. Join us in one of our virtual booths and I'll be happy to answer more questions. Thank you. I believe I still have a few more minutes.

I'll continue to stay on here for a while, so if you have any questions, please share them. Otherwise, I'd also love to hear where people are from — it's always interesting for me. This is my first global event, so it would be great to see where everyone's from. To give you some background: I was born in India, I've lived in Japan, Canada and Hong Kong, and now I'm in the US, so I love traveling. Florida, US — I've only been there once, I'd love to go back. Are most people here software developers or managers? Just curious to see at what level. Ah, nearby, in Boston — if you're ever in New York City, look me up; I'll share my LinkedIn so you know how to reach me. There's a question: "Do you have any tips to get tech departments to prioritize quality and maintainability issues over product initiatives, in terms of measuring developer pain?" Yeah, that's a common problem, and it's actually why it's important to understand why maintainability matters: because it is cross-cutting across a system, and it really impacts the business goals.

One approach that's worked a little for me: first, it always helps if you have a leadership team that understands technology and these pain points. But if they don't, one thing that helps is to do a bit of reverse engineering. Look at what the business goals are — say innovation, or flexibility, or scaling more features to customers faster — and try to connect that to a maintainability or quality metric. Say: hey, if you want me to scale and add more features, look at the number of dependencies in the system — and you can use one of the tools to show what those dependencies look like.

Another thing that always helps — and what we do at our organization — is a cost estimate analysis. This always works with management or leadership, because people always look at numbers; you can, frankly, try to shock them: look at how many developer-years, or months, it's going to take to fix this first.

And we can't just scale: if we keep adding more and more features, the system is going to crash at some point, because it won't be able to cope with the load. So figure out how to tie these quality metrics — which are actual data, so they're objective, not subjective — to a cost measurement: this is going to take X developer-weeks or months to fix. And break it down; don't make it too overwhelming. Make it easy for people to understand. You can do this two ways. One is forecasting: if we continue down this road of not fixing technical debt, here's where we're going to end up — a big ball of mud. The other is the more optimistic way: break it down and look at each item.

This is only going to take two weeks, from a developer standpoint, to fix this dependency issue, or code reuse issue, or some entanglement — let's prioritize it and put it in the backlog as technical debt. Share it that way, so it's easy and manageable from a leadership position to say: OK, I can have one or two developers look at this for a couple of weeks and fix it for the long term.
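(A back-of-envelope sketch of that kind of cost translation, using the 50,000 duplicated lines from the Hadoop example. The productivity and rate figures are made-up assumptions to show the arithmetic, not benchmark numbers — calibrate with your own historical data.)

```python
# Assumed inputs (all made up for illustration).
duplicated_lines = 50_000     # e.g. the YARN/MapReduce duplication above
lines_per_dev_day = 500       # assumed refactoring pace
dev_day_cost = 800            # assumed fully loaded daily rate, in dollars

dev_days = duplicated_lines / lines_per_dev_day   # 100 developer-days
dev_weeks = dev_days / 5                          # 20 developer-weeks
cost = dev_days * dev_day_cost                    # $80,000

print(f"~{dev_days:.0f} developer-days (~{dev_weeks:.0f} weeks), ~${cost:,.0f}")
```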

So: always tie it back to business goals, and put some cost estimation around it in terms of developer-weeks or just plain cost. You have to speak their language, essentially. Is there another question? If you don't mind, I'm reading: "I'm from Austin, Texas. I'm just two months in as a BI developer; I was an accountant for the past 10 years. Is there any way I should approach learning this brand-new IT world?" To be honest, everyone has to learn this — it's not just you. Technology is changing so fast; every other day I hear a new buzzword. The only way that helps me — I don't know how others do it — is constant learning. There are lots of books and blogs, and places like Stack Overflow, and people you can trust who have written books about things. I would focus first on getting a general view of different things and figuring out what you're interested in. If you're a BI developer, there are BI tools out there; focus on those, dive deeper, and see who the industry experts are, what people are writing in blogs, and what's happening.

So that's the only way I know: researching, going online, and keeping up to date. But that's a common problem in technology for everyone. I hope that helps. The next question: is it more important to first learn the business, or to learn coding? You've stumped me there, because it depends on what you're looking at and what type of coding you're doing. I would say, for me personally, I always need to know the why — why am I coding, why am I doing something — before I start. I also find it interesting to understand what business problem I'm solving. In your case — I'm not too familiar with BI development, but looking at it from that standpoint: SQL? Oh yeah, then you need to know SQL development. Focus on just one or two books, and start with the basics: always figure out the foundation first and then build the more complex topics on top of it. Spend a couple of months on just the foundation.

That's what I would say. The business part is really about understanding; some people don't need to know what the business is — they just want to code, and that's fine. But I always need to understand what I'm solving and why I'm spending eight hours a day coding. So it depends on your interest. From a programming standpoint, though, always start with the fundamentals and foundations; once you understand those, you'll be able to connect the dots as you encounter new business problems. I hope that helps. You're welcome. I'm curious, actually: how many of you use measurements and metrics in your organization or your work to define issues — or is it generally feature-based? How many of you use nonfunctional metrics? Yeah, feature-based — that's typically been my experience as well. It's really an underrated topic, these nonfunctional requirements I'm talking about, and we really need it, right? I will stick around for a few more minutes; I think we're about five minutes past our session time.

So if anybody has any more questions, I'll stay on for maybe a couple more minutes — feel free to ask. And please do give me feedback in the chat; I'm always looking to see how I can improve and make this more useful. All right, there are no more questions. Thank you, everyone, and please do join one of the virtual booths; we'd love to chat with you one on one. I hope you have a good rest of your day, afternoon or evening, wherever you're from, and enjoy the rest of the WomenTech conference. Bye.