Throughout my career, I've been warned, and I've warned others, about memory leaks and why it's so important to release unmanaged resources with the `Dispose` method. Last week it finally happened: I saw my first memory leak in production - that I know of - and over time it was eating up all the memory.

We noticed that the process of a new API was consuming more memory compared to the other processes. At first, we didn't think much of it and assumed this was normal because this API receives a lot of requests. By the end of the day, the API had almost tripled its memory consumption, and at this point we started to think that we had a memory leak. Remember that high memory usage doesn't always mean that there's a memory leak; the problem is when the memory increases linearly over time without dropping back to its normal consumption. The second day it happened again, and it was worse: the leaking API was consuming almost 4GB, up to 5 times more resources compared to the other APIs.

I thought that you must be able to reproduce a problem first in order to solve it, because reproducing it gives you some insights into the problem, and you know exactly where and when it occurs. Since this was a first-timer for me, I followed the Microsoft Docs tutorial Debug a memory leak in .NET, which is well written and was exactly what I was looking for. In this tutorial you use the dotnet CLI tools dotnet-trace, dotnet-counters, and dotnet-dump to find and troubleshoot a leaking process.

While I was impressed with these tools, I wasn't able to reproduce the memory leak locally, despite my efforts to mimic the traffic towards the API with Artillery.

So why try to reproduce the problem when it's already occurring in a production environment? Most bugs can't be looked into in a production environment, or only with difficulty, but by creating a dump file of the process we do have a way to look inside it. All of the information that we need is already there; it just needs to be collected and analyzed.

To create a dump file, use the dotnet-dump collect command, or, if you can log in on the server, open the Task Manager, right-click on the process, and select "Create dump file". This gives you a *.dmp file, which you can analyze with the dotnet-dump analyze command.
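As a sketch of what that flow looks like with the dotnet-dump global tool (the process id and dump file name below are placeholders):

```bash
# Install the dotnet-dump global tool (requires the .NET SDK)
dotnet tool install --global dotnet-dump

# List the running .NET processes and note the PID of the suspicious one
dotnet-dump ps

# Capture a dump of that process; this only takes a few seconds
dotnet-dump collect --process-id <PID>

# Open an interactive analysis session on the resulting file
dotnet-dump analyze <dump-file>
```

Inside the analyze session, the SOS commands `dumpheap -stat` (objects grouped by type, with counts and total sizes) and `gcroot <address>` (the chain of references that keeps an object alive) are typical starting points for a leak investigation.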
Personally, I like it more visualized, and so I imported the file into dotMemory to analyze it. There's also PerfView, which is free to use.

When the dump file was imported, the first graph and data table made it very obvious that we had a memory leak: a file logger was using 90% of the memory, the equivalent of 3.5GB.

![The starting page of dotMemory]()

![A pie chart with a more in-depth look into the dump]()
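The logger's code isn't shown in this post, but as a hypothetical illustration, the classic shape of such a leak is a statically rooted buffer that grows with every request and is never trimmed; something along these lines:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;

// Hypothetical sketch, not the actual logger from this incident:
// every log entry is buffered for a periodic flush, but the buffer
// is never cleared, so memory grows linearly with the request count.
public class LeakyFileLogger
{
    // A static field is a GC root for the lifetime of the process,
    // so nothing reachable from this queue can ever be collected.
    private static readonly ConcurrentQueue<string> Buffer = new();

    public void Log(string message) =>
        Buffer.Enqueue($"{DateTime.UtcNow:O} {message}");

    public void Flush(string path) =>
        // Bug: the entries are written to disk but never dequeued,
        // so every line that was ever logged stays rooted forever.
        File.AppendAllLines(path, Buffer);
}
```

In a dump of a process like this, `dumpheap -stat` would show strings dominating the heap, and `gcroot` on one of them would lead straight back to the static buffer.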
While this leak was obvious, I still felt a little bit overwhelmed by all the data. To get a better understanding of how I should interpret the data, I read up on the subject. What I read was beginner friendly, while still going in-depth; it's focused on PerfView, but the gained knowledge can also be used while reading the data in other tools.

Don't wait and guess whether there's a memory leak; the data is right in front of you, you just have to go get it. When you have a hunch about a leak, immediately create a dump file and analyze it. While I thought this would be hard and time-consuming, creating a dump file just takes a few seconds, and when you have a real problem, the analyzing part will also be quick. The fastest way to look into a memory leak is to create a dump file of the process in production; there's no need to try to reproduce the problem, because you can access all the data you need. When the cause is found, you can fix it locally and verify the fix locally.

Afterwards, keep an eye on the metrics of your application. In a .NET environment this usually means looking at the metrics in Application Insights. You can even create alerts to be notified when there are abnormalities.

A list of resources about this topic that I find useful and wished I had found earlier:

- Debug a memory leak in .NET (Microsoft Docs)
- dotMemory (JetBrains)
- PerfView (Microsoft, free to use)