Here’s a common scenario that you’ll come across when trying to troubleshoot failed downloads in BITS.

BITS Event ID 61 indicates that a BITS download stopped with an error, whereas Event ID 60 indicates a successful download. Usually the job status is TRANSIENT_ERROR (you can check this using BITSADMIN /LIST /ALLUSERS).

Here’s the offending event:



The status code is 0x80072EFE. This is a hexadecimal HRESULT and can be broken down into several parts: the 0x8007 prefix indicates a failure HRESULT wrapping a standard Win32 error code, and the low word, 2EFE, can be converted to decimal with Calculator in programmer mode.
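If you'd rather skip Calculator, the same breakdown can be done with a couple of bit operations. A minimal Python sketch (the field layout follows the standard HRESULT format: severity bit, facility, then the 16-bit error code):

```python
# Decode a BITS HRESULT like 0x80072EFE into its parts.
hresult = 0x80072EFE

severity = hresult >> 31            # 1 = failure
facility = (hresult >> 16) & 0x7FF  # 7 = FACILITY_WIN32 (wrapped Win32 error)
code = hresult & 0xFFFF             # the underlying Win32/WinINet error code

print(severity, facility, code)     # → 1 7 12030
```

The low word (0x2EFE) is the part you actually want: 12030 in decimal.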

Then select Decimal:

So the error is 12030. Sadly this is not part of the default loaded error messages, so a “net helpmsg 12030” command gives you nothing. Instead we have to turn to Google, or, since we 2Pinters know that BITS uses WinINet for all its communication, we can go to the following URL:

Here we can see the 12030 error (note: the list is not in numeric order!). The following info is specified:



The connection with the server has been terminated.

So, what does this mean? Basically, this is typical when there isn’t enough bandwidth to keep the connection to the server open, i.e. the connection times out and gets closed.


If you then look in the event log again, you can see an Event ID 59, which means that BITS is starting the same job again, basically the same second it stopped. Confusing? Well, after 3 pints maybe, but we 2Pinters know that it’s impossible for the BITS client to know that it was the lack of bandwidth that stopped us. If BITS fails to understand that the bandwidth is too low, it just assumes that maybe the server went down or that there was a temporary glitch in the network. So it tries again. It does this at the start of a transfer: 3 times within a second. If it still can’t access the content, it goes into retry mode, where it backs off and waits for the ‘RETRY AFTER’ setting specified within the job. This defaults to 60 seconds. Of course this is fine, unless you happen to have 10,000 SCCM clients who all kick off at the same time, do the 3 attempts, fail, and go into a retry cycle at the same time as the other 9,999.
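The fix for that stampede is jitter: instead of every client waiting the same fixed 60 seconds, give each job a random delay so the retries spread out. A hypothetical Python sketch of the idea (the constants and function name are illustrative, not from the BITS API; setting the delay on a real job would go through the BITS interfaces):

```python
import random

MIN_DELAY = 60            # 1 minute (illustrative lower bound)
MAX_DELAY = 2 * 60 * 60   # 2 hours (illustrative upper bound)

def randomized_retry_delay(rng=random):
    """Pick a random retry delay so clients don't all retry in lockstep."""
    return rng.randint(MIN_DELAY, MAX_DELAY)

# With 10,000 clients failing at once, a fixed delay puts them all back
# at second 60; randomizing spreads the retries across the whole window.
delays = [randomized_retry_delay() for _ in range(10_000)]
print(min(delays), max(delays))
```

With a fixed delay every client lands on the server at the same second again; with the randomized window the load is spread across the whole interval.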

So, we have a little free tool on the way that can randomize the retry period. It catches these 61 errors as they happen and gives the job a configurable random timeout (1 minute to 2 hours by default). If you want to test this out for us, drop us a line!