We recently had an issue on our live server that caused our Web App to stop responding. All we were getting were 503 errors until we rebooted the server, and then it was fine. Eventually I traced it back to the httperr.log and found a whole lot of 1_Connections_Refused errors. Further investigation seemed to indicate that we had reached the nonpaged pool limit. Since then we have been monitoring the nonpaged pool memory using Poolmon.exe, and we believe we have identified the tag that is causing the problem.

If we use poolmon.exe /g, it shows the Mapped Driver as.

My team has spent considerable time researching this problem and hasn't been able to find a way to narrow it down to a specific application or service.

I get the sense that most people seem to solve the problem by killing processes on the machine until they see the nonpaged memory reset. That is not exactly what you want to be doing on a production machine. If I open up Task Manager and view the process list, I see MailService.exe with an NP Pool value of 105K; this is 36K higher than the value of the process listed second. As we have had some problems with our Mail Server in the past (which may or may not be related to this issue), my gut feeling is that it is causing the problem. However, before we go off restarting services, I'd like to have a little more certainty than just a "gut feeling".

I've also tried using poolmon.exe /c, but this always returns the error "unable to load msvcr70.dll/msvcp70.dll" and it doesn't create localtag.txt. My colleague had to download pooltag.txt from the internet because we can't figure out where it is located. We don't have the Windows Debugger or the Windows DDK installed (that I can see); maybe the above error is given because we don't have either of these installed, but I don't know.

Finally I tried, from C:\windows\system32\drivers: findstr /m /l Even *.sys. This just returned .sys files and again wasn't at all helpful with the problem at hand.

So my question is this: is there any other way to narrow down the cause of this memory leak?

As suggested below, I have been logging the Pool Nonpaged Bytes for the last day or so to see if any process is trending up. For the most part, all of the processes appear to be fairly static in their usage; two of them look to have ticked up slightly. I will continue to monitor this for the next few days. I also forgot to mention earlier that none of the processes appear to be using an excessive number of handles either.

I have now been monitoring this for the last couple of weeks. Both the Nonpaged Bytes Pool for individual processes and the total Nonpaged Bytes Pool have remained relatively stable during that time.
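To make the poolmon comparison concrete, here is a rough Python sketch that diffs two saved poolmon dumps and reports which nonpaged tags grew between snapshots. The column layout assumed here (tag, pool type, allocs, frees, diff, bytes, per-alloc) and all the sample numbers are assumptions for illustration; check them against the actual poolmon output on your system before relying on this.

```python
# Sketch: compare two saved poolmon snapshots and report which nonpaged
# tags grew. Assumes each data line looks roughly like:
#   "Even Nonp    1274    1001    273    34944    128"
# (tag, pool type, allocs, frees, diff, bytes, per-alloc) -- the exact
# poolmon column layout may differ on your system.

def parse_snapshot(text):
    """Return {tag: bytes} for nonpaged entries in a poolmon dump."""
    usage = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 6 and parts[1].startswith("Nonp"):
            usage[parts[0]] = int(parts[5])
    return usage

def growing_tags(before, after, threshold=0):
    """Tags whose nonpaged byte count grew by more than `threshold`."""
    old = parse_snapshot(before)
    grew = {}
    for tag, new_bytes in parse_snapshot(after).items():
        delta = new_bytes - old.get(tag, 0)
        if delta > threshold:
            grew[tag] = delta
    return grew

# Hypothetical snapshots taken some time apart:
snap1 = """Even Nonp    1274    1001    273    34944    128
File Nonp    5000    4990     10     1600    160"""
snap2 = """Even Nonp    2274    1001   1273   162944    128
File Nonp    5010    5000     10     1600    160"""

print(growing_tags(snap1, snap2))  # {'Even': 128000}
```

Diffing whole snapshots this way avoids staring at one number in a live poolmon screen and makes sustained growth in a single tag obvious.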
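The findstr step above just searches driver binaries for the literal tag string. The same idea as a self-contained Python sketch, using the "Even" tag from the post and fake driver files so it can run anywhere (on the real server you would point it at the drivers folder instead):

```python
import pathlib
import tempfile

def drivers_containing_tag(tag, driver_dir):
    """Names of *.sys files whose bytes contain the literal pool tag.

    Same idea as running `findstr /m /l Even *.sys` in the drivers
    folder: a hit only means the 4-character tag string appears
    somewhere in the binary, not that the driver is the leaker.
    """
    needle = tag.encode("ascii")
    return sorted(p.name for p in pathlib.Path(driver_dir).glob("*.sys")
                  if needle in p.read_bytes())

# Demo against made-up driver files in a temporary directory:
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "mail.sys").write_bytes(b"\x00EvenMoreData\x00")
    (pathlib.Path(d) / "net.sys").write_bytes(b"\x00NoTagHere\x00")
    suspects = drivers_containing_tag("Even", d)

print(suspects)  # ['mail.sys']
```

As the post found, this tends to produce a list of candidates rather than an answer: many drivers can contain the same four bytes by coincidence, so it only narrows the field.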
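The trend-logging described in the update (watching per-process Pool Nonpaged Bytes over time) can be made less eyeball-dependent with a simple least-squares slope over the logged samples. Everything below is hypothetical: the process names, the byte values, and the 1024-bytes-per-sample threshold are made up for illustration, not taken from the post.

```python
def npm_slope(samples):
    """Least-squares slope (bytes per sample) of a series of readings."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def trending_up(history, min_slope=1024):
    """Processes whose nonpaged-pool usage grows faster than min_slope."""
    slopes = {name: npm_slope(vals) for name, vals in history.items()}
    return {name: s for name, s in slopes.items() if s > min_slope}

# Hypothetical logged samples of "Pool Nonpaged Bytes" per process:
history = {
    "MailService.exe": [105_000, 109_000, 113_000, 117_500],
    "w3wp.exe":        [69_000, 69_200, 68_900, 69_100],
}
print(trending_up(history))  # {'MailService.exe': 4150.0}
```

A steady positive slope across many samples is much stronger evidence than a single high reading, since a process can legitimately sit on a large but stable nonpaged allocation. Note that a kernel-mode leak may not show up against any user process at all, which is why the tag-level poolmon view is still needed.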