Welcome To The 2Pint FAQ Emporium. Choose Yer Poison!

BranchCache & BITS

The Industrial Strength Microsoft P2P Tech

BranchCache FAQ


ConfigMgr PeerCache

The Microsoft ConfigMgr P2P Feature

PeerCache FAQ


Delivery Optimization

The Latest Windows 10 Download And P2P Service

Delivery Optimization FAQ



StifleR

2Pint Content Distribution and Management Engine.

StifleR FAQ


BranchCache FAQ: If you want your Hash, in a Dash, for little or no Cache..

“Where’s My Cheese?”

The BranchCache Cache is a database of content, and/or hashes of content. Simple.

BranchCache maintains two caches, on both the Content Server and the Client. It’s important to learn to distinguish between the two. By default, these are located at:

%WINDIR%\ServiceProfiles\NetworkService\AppData\Local

The Publication Cache – \PeerDistPub folder

This is the HashCache – where generated hashes are stored.*

Content Server – the Hash Cache is populated as content requests come in to the server.

Client System – the Hash Cache is usually empty, unless content is injected (imported).

*If Windows Server Deduplication is enabled, the BranchCache HashCache can be empty, as BranchCache (V2) is designed to utilize the Deduplication Chunk Store.

The Republication Cache – \PeerDistRepub folder

This is the DataCache – where content is stored (and hashes that were downloaded by the client – but mostly content)

Content Server – Usually empty (unless the content server itself is a client), as the content server only needs to generate the Hashes of the data. The Content is already stored (as files).

Client System –  The data cache will be populated with downloaded BranchCache content.

“In Distributed Cache mode, BranchCache-enabled Windows clients cache copies of files downloaded from content servers (such as ConfigMgr Distribution Points) and make them available to other clients (peers) when requested. Distributed Cache mode is especially beneficial for locations that do not have a local server, but works equally well in large subnets with well-connected WAN links.”


Requires no local BranchCache server

Reduces the load on existing content servers by using P2P to share content

Integrates with Windows Server Deduplication to save even more WAN bandwidth

Easily configured and managed via Group Policy and/or ConfigMgr


Limited to a single subnet. So if, for instance, you have separate subnets for wired vs wireless clients, you will effectively have two distributed caches, and content may well be copied twice to that location.

High mobility can mean that content can ‘go missing’ if a user has cached content (to a laptop for instance) and then relocates.

Initially requires two copies of content to be stored – one in the content download location and one in the BranchCache cache. (the content can be deleted and still be retrieved from the BranchCache cache)


Basic Operation

PC1 performs a ‘Get’ Request – but downloads only the Identifiers (hashes) that describe the content.

PC1 performs a local broadcast to see if anyone else has this content. If they do, PC1 will get it locally from peers.

If the content is NOT local, PC1 will go back to the server and get the content. Once downloaded, the content is then available to peers on that subnet.

PC2 performs a ‘Get’ Request – but downloads only the Identifiers (hashes) that describe the content.

PC2 performs a local broadcast to see if anyone else has this content.

PC1 has the content, so PC2 will transfer it locally from PC1.
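For the programmatically minded, the whole flow above can be sketched as a toy simulation. All the names below (the `fetch` function, the dictionaries) are invented for illustration – this is not a real BranchCache API, and real discovery uses multicast, not a Python loop:

```python
# Toy model of the Distributed Cache flow above. Purely illustrative.

def fetch(client, content_id, peers, server):
    """Return (data, source) for content_id, preferring local peers."""
    # Step 1: the 'Get' request returns only the identifiers (hashes).
    content_hash = server["hashes"][content_id]
    # Step 2: local broadcast -- does any peer already have this hash?
    for peer in peers:
        if content_hash in peer["cache"]:
            return peer["cache"][content_hash], "peer"
    # Step 3: nobody local has it, so fall back to the server, then
    # cache it locally so later peers on the subnet can get it from us.
    data = server["content"][content_id]
    client["cache"][content_hash] = data
    return data, "server"

server = {"hashes": {"pkg1": "h1"}, "content": {"pkg1": b"bits"}}
pc1, pc2 = {"cache": {}}, {"cache": {}}

data1, src1 = fetch(pc1, "pkg1", [pc2], server)  # PC1: WAN download
data2, src2 = fetch(pc2, "pkg1", [pc1], server)  # PC2: local, from PC1
```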

So what is BranchCache? Here’s a nice description from the Microsoft Protocol docs on the subject:

“The goal of the Content Caching and Retrieval System is to decrease WAN network use. This is accomplished by caching content that has been retrieved over a WAN link (or any high latency link) from a content server by a set of actors (computers, applications, or people) connected to a local area network (LAN) and making it available for subsequent use within the LAN environment in a secure and effective manner. The overall effect is to reduce WAN traffic and therefore increase application performance.”

In other words, it’s a WAN accelerator, which caches content locally to avoid unnecessary round trips to the data source by allowing clients on the same subnet to retrieve content from peer systems. **IT DOES NOT DOWNLOAD STUFF**


“It makes your network go faster”

By default, BranchCache uses port 80, which can cause conflicts, particularly for local Web servers like Apache, WAMP, Skype, IIS and some Symantec processes.  It’s also good security practice to change the port from its default to make it harder for them hackers.. So here’s how to do it.

DO NOT try to do this with the BranchCache GPO because, by default, the GPO will create a Reservation URL targeting TCP Port 80, and if the settings are set by GPO you won’t be able to create a new Reservation URL. If you want to change the BranchCache default TCP Port, you have to rely on some good old Registry edits and NetSh.exe


Put BranchCache into ‘Local’ Mode.

Netsh.exe br set service mode=local

Set the ListenPort value in the registry

REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\PeerDist\DownloadManager\Peers\Connection" /v ListenPort /t REG_DWORD /d 1365 /f

Put BranchCache back into Distributed Mode

Netsh.exe br set service mode=distributed

If successful, you should see the following output as BranchCache creates a new URL reservation for the new port number.

Configuring URL Reservation url=http://+:1365/116B50EB-ECE2-41ac-8429-9F9E963361B7/, sddl=D:(A;;GX;;;NS) ... Succeeded
Enabling firewall rule group BranchCache - Content Retrieval (Uses HTTP)... Succeeded
Setting Service Start Type to Manual... Succeeded
Setting Service Mode... Succeeded
Starting Service... Succeeded

TIP: There is also a great PowerShell script which was developed for this very task. It’s available at:


Excellent question!

Multiple content servers serving up the same content need to have the exact same ‘Server Secret’ set. Otherwise the clients would see the content as different on each server.

If you have a cluster of content servers, then you’ll want to set the same ‘Server Secret’ for each server manually.

If you’re using SCCM and have multiple Distribution Points serving the same Packages or Apps, make sure that you enable BranchCache using the ConfigMgr UI – ConfigMgr then takes care of the server secret for all the DPs. Done!

It would still work if you don’t, but the risk is that your servers will be generating different hashes for the same content – which is not cool, and would result in less bandwidth savings. And savings is what we’re all about here..

You can set the Server Secret using good old netsh or PowerShell.

netsh br set key “I Love 2Pint”

or Set-BCSecretKey “I Love 2Pint” for PowerShell

Here is a typical BranchCache operation from a security viewpoint.

- Server authenticates the client and performs authorization checks.

- Server transmits the content information structure to the client only if the client has access. Transfer happens over the accelerated protocol – HTTP/S etc.

- Client uses the content information structure to calculate:

  - the Segment ID (public)

  - the Encryption Key (private)

- Client multicasts the Segment ID to find a peer with the data.

- Client downloads encrypted blocks from a peer and decrypts them with the Encryption Key.

- Cached data is stored in encrypted form in the BranchCache Cache.

The above is Out-Of-The-Box behaviour. No further security configuration is required.

Note: Data in the Cache is not encrypted on Windows 7, but it is on Windows 8 and above. Even where it’s not encrypted (Windows 7), you cannot browse this info and need a high-privilege account to access it. Data transferred over the wire is encrypted. Data in the Cache is not stored by file but by hash, so even if you know the name of the file you can’t get the data. In order to get the data, you need to be admin and have the hash, which is protected by the login to the IIS server.


Hmm, alright, but only as it’s you. Here’s a crash course in BranchCache Hashing for beginners.

Server Secret – Shhh. Used as a key in order to create a content-specific hash that is sent to clients.

Hash of Data (HoD) – the juice, the magic, the stuff dreams are made of. It’s what BranchCache uses to make sense of the files it needs to download.

Segment Secret – Used as the encryption key to generate the Segment ID (along with the HoD)

Segment ID – Used to locate the content once the Hash is downloaded

HoHoDk – say what? = HMAC(Kp, HoD + C), where C is the ASCII string “MS_P2P_CACHING” with NUL terminator. In case you were wondering.
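If you fancy seeing that formula in action, here’s a minimal sketch in Python. The inputs are made-up toy values (in real life Kp is derived from the server secret and HoD from the segment’s block hashes), and HMAC-SHA256 is assumed as the HMAC function here – treat it as illustration only:

```python
import hashlib
import hmac

def hohodk(kp: bytes, hod: bytes) -> bytes:
    # C is the ASCII string "MS_P2P_CACHING" including its NUL
    # terminator, exactly as in the formula above.
    c = b"MS_P2P_CACHING\x00"
    return hmac.new(kp, hod + c, hashlib.sha256).digest()

# Toy inputs only -- not the real key derivation.
kp = hashlib.sha256(b"I Love 2Pint").digest()
hod = hashlib.sha256(b"block hashes go here").digest()
print(hohodk(kp, hod).hex())
```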

Our testing shows an effectiveness of about 80-90% per client in a worst-case scenario (with a deployment executing at the same time to about 50 machines, without BITS being configured). If you use “Run as soon as possible” and let the machine policy trickle down machine by machine, the effectiveness can be about 95-100% depending on content size vs number of machines vs WAN speed. This also assumes that you have a BITS policy in place..

To guarantee maximum BranchCache efficiency, where only one copy of the content is transferred – take a look at our StifleR Product which takes away all the worry and hard work from content transfers!

Once the primary client data cache (aka Primary Republication Cache) reaches its max size, each time new segments are added to the cache, enough old segments are evicted and discarded based on an LRU (Least Recently Used) algorithm.

That happens in Win7 and Win8.x

In Windows 7 and Windows 8.x that “max size” is always the max configured size (typically a percent of the target disk size), regardless of how much free space is available on disk.

In Windows 10 BranchCache uses an “effective max size”, which is in turn a percentage of the max configured size.

This percentage is 100% when the disk is not in low disk space conditions, it is less than 100% when the free space is below the low disk space threshold, and it gradually goes down to 0% as available free space on disk goes to zero.

That logic only sets the effective target max size. The algorithm used to make room / shrink the cache is still the same LRU one, so least-recently-used segments get kicked out first.

So basically, in Windows 10 you get a cool new dynamically shrinking Cache!
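A rough sketch of the two mechanisms just described – the Windows 10 “effective max size” taper and the LRU eviction – might look like this. The names and the linear taper are our own illustration, not the actual Windows implementation:

```python
from collections import OrderedDict

def effective_max(configured_max, free_space, low_threshold):
    """Win10-style taper: 100% of the configured max while free space
    is healthy, shrinking toward 0 as free space goes to zero."""
    if free_space >= low_threshold:
        return configured_max
    return configured_max * free_space // low_threshold

class DataCache:
    """Fixed-size cache with least-recently-used eviction."""
    def __init__(self):
        self.segments = OrderedDict()        # segment_id -> size
    def used(self):
        return sum(self.segments.values())
    def touch(self, seg_id):
        self.segments.move_to_end(seg_id)    # an access makes it 'recent'
    def add(self, seg_id, size, max_size):
        # Evict least-recently-used segments until the new one fits.
        while self.segments and self.used() + size > max_size:
            self.segments.popitem(last=False)
        self.segments[seg_id] = size

cache = DataCache()
cache.add("A", 40, 100)
cache.add("B", 40, 100)
cache.touch("A")             # A is now the most recently used
cache.add("C", 40, 100)      # B (least recently used) gets evicted
```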

Yes, and you don’t need Windows Enterprise either as this is downloaded using BITS. To find out more go to: http://blogs.technet.com/b/appssrv/archive/2012/02/20/branchcache-for-exchange-2010-oab-download-how-to.aspx

But ignore the misinformed bit about needing Enterprise clients. BranchCache for everyone!

Yep, that can happen. Imagine a slow download of a large file. During the download, some bright spark decides to ‘tweak’ the file. BITS will detect that either the Timestamp or the Filesize has changed, and will restart the transfer. So be aware…

If you have a bunch of files, and BITS has downloaded most of them, and you decide to change all of them, BITS will ONLY re-transfer the file that it is currently downloading. It doesn’t go back and check whether previously downloaded files have changed.

Chaos. Headless chicken scenario. End of the known World.

Or at least it can be. Depends on the speed of the link really and number of clients too. BITS doesn’t like to be kept waiting for too long. So what you may see are some Events in the BITS Event log with an ID of 61. This usually means that the BITS Job is set to a ‘Transient Error’ state – so all is not lost – it will simply retry in the configured Retry Interval.

Also, providing you are using BranchCache (and why wouldn’t you?) there exists the concept of a ‘Flash-crowd’ scenario. This is where BranchCache is clever enough to detect that it’s downloading a segment that has been requested by other clients. It generates a message to the BITS client – and you will see a burst of Events (either 10, 15 or 20) in the BITS event log with an ID of 208. This tells the BITS client to ‘back-the-heck-off-sunshine’, because someone is downloading the content that you want, and to go back across the WAN to get it would be both greedy and unnecessary.

The easiest way to check is to look at a job in the Event Viewer – go to the BITS-Client log under the “Applications and Services Logs” folder.

There you will find an event with ID 4 signalling the end of the download, then you hit the details tab and check out the stats.


And if only a partial amount was BranchCached, check out the corresponding IDs 59 and 60 for that job (one per file in the package). They will be just before the Event ID 4. Within the 60 Event, look for PeerProtocolFlag – if that isn’t set to “1” then the BITS job wasn’t trying to BranchCache at all..

Oh no. That would be too simple. Remember that the client setting in SCCM is for Background transfers only. So if you make a deployment ‘Available’ via Software Center etc as opposed to ‘Required’, then it will be a BITS Foreground transfer that is created, and it will attempt to use whatever bandwidth it can get its grubby little hands on..

Not really love.

BITS queries internet gateway devices (IGD) and attempts to use their traffic counter information in its throttling calculations if possible. It’s usually NOT possible as you’d have to enable uPnP for that…which is not cool with most network types. It will also detect ‘user activity’ so if you’re busy downloading stuff in IE then it may be nice and throttle back slightly. But most of the time it just goes hell-for-leather and eats all the bandwidth that it can get its grubby little hands on. You can however configure BITS Policy which can make a big difference to the effectiveness of transfers.

Remember though that BITS policy only applies to Background transfers. Foreground transfers such as User initiated downloads from ConfigMgr Software Center or App Catalog are Foreground transfers – and will eat up all the bandwidth that they can.

Our StifleR toolset uses BITS policy dynamically to achieve bandwidth throttling based on various conditions such as latency etc. So just buy that right?

We got a blog for that  – Debugging Event 61 in BITS

Sometimes you might see Event 311 in your BITS event log. Usually it’s nothing to worry about. One of the most common instances of this event occurs when you are trying to download a file in a BITS job that’s too small for BranchCache, which by default has a minimum file size of 64k. BITS will just get the file from the server in a non-BC download. Here’s what you might see:

‘The BITS peer transfer with the XXX ID for the CCMDTS Job transfer job resulted in the following error: 0x80040010’

No, not really. When using BranchCache with BITS we recommend setting a BITS policy to ensure timeouts are avoided. Otherwise, BranchCache over BITS performs well with over 40 machines on a 512 Kbit/s link – it just takes a long time to download large content. Also avoid “timed” deployments and let the clients drop in one by one via normal ConfigMgr policy.

28 days mate. Although, it’s not a straight 28 day thing… IF the data in the cache is accessed – say from another computer requesting that segment of data, then the 28 day counter is reset to zero. Hmmm, I can feel a new 2Pint App coming on..

Also, you can set the cache expiry time to whatever you like in Windows 8.x/10 and WS 2012/16 using the Set-BCDataCacheEntryMaxAge PowerShell cmdlet. This only applies to content placed in the cache AFTER this has run, so bear that in mind. You might want to integrate this into your OS Build process, no?
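To make the reset behaviour concrete, here’s a tiny model of the 28-day rule (our own illustration, not real BranchCache code):

```python
# Each segment remembers the day it was last accessed; an access by a
# peer resets its 28-day countdown. Purely illustrative.
MAX_AGE_DAYS = 28

def surviving(segments, today):
    """Return the segment IDs still within their 28-day window."""
    return {sid for sid, last_access in segments.items()
            if today - last_access < MAX_AGE_DAYS}

segments = {"A": 0, "B": 0}      # both cached on day 0
segments["A"] = 20               # a peer reads A on day 20 -> timer resets
print(sorted(surviving(segments, 30)))   # ['A'] -- B expired, A lives on
```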

Yeah, in Current Branch 🙂 It’s in the client settings.

You can’t configure ALL BranchCache settings however, so you may have to use a GPO or ridiculous hack of your choice..

Yes, but only in ConfigMgr CB. There is no UI to enable BranchCache for a Task Sequence in earlier versions. Luckily it’s only a one-liner to enable it. Check out the blog for info on this. You know you can do it!

The BranchCache Cache format differs between OS versions – it changed significantly both from Windows 7 to 8 and from 8 to 8.1 and 10. However, when the BranchCache service starts, any existing Windows 7 or Windows 8.x cache that is present is automatically migrated to the current/latest format. Yippee! One less thing to worry about..

Yes you can. It’s not a straightforward process, but it is technically possible! Contact us for more info maybe..

Yep. And it can be really useful. Imagine you want to re-image all the machines at a remote location at the end of a bit of string (or a T1 link maybe). Using this method you can take the files that you need, export them to USB or other media, then use that to ‘seed’ the Cache on one of the remote machines. Then when they need to access that data, all they need to get from the remote server is the Content Information – reducing WAN usage massively.

On The Server

  1. Stage the Content
    Publish-BCWebContent -Path c:\inetpub\wwwroot\MyBigWIMFile.WIM -StageData -StagingPath c:\temp
  2. Export the content to a package (creates a handy .zip file)
    Export-BCCachePackage -StagingPath c:\temp -Destination c:\MyBCPackage

On The Client

1. Import the Package Created in Step 2 Above

Import-BCCachePackage -Path C:\temp\pkg\PeerDistPackage.zip

..And you’re done! The content is in the client cache ready for access by other computers, and the hashes are on the server (they get generated during the Staging part).

Yes, the “Publish-BCWebContent” PowerShell command works on client operating systems as well. Smart huh? It’s the same procedure as for exporting from a content server, but you must make sure that the client computer has the same server secret as the content server.

BranchCache via SMB is closely integrated with the Offline Files feature in Windows. So SMB integration can be unpredictable due to how Offline Files works (it’s a read ahead cache that is fetching data it thinks you will access in the near future).

What you may see in your testing is that although you are flushing the BranchCache cache, the content may well be in the Offline Files cache (worth checking). I have seen this when testing SMB transfers, although it was a couple of years back so the old brain is a bit fuzzy, but it was quite hard to get consistent results due to CSC going off and populating the cache after several repeated attempts.

Also remember reading that even setting the latency to 0 does NOT guarantee that BC will kick in. There’s code in BranchCache to split the file transfer between the server and the client, but because the server is so fast, BranchCache can kick in too late. This seems to be fixed in V2 BranchCache though.

In a nutshell? Hmmmmm.

Yes, but then you have to deal with the latency issues/configuration etc, and it’s not certain that it will be as efficient.

First of all, the latency only applies to BranchCache over SMB, so no HTTP traffic is “influenced”, as some people on the t’internet have tried to say. With BranchCache over HTTP, it always tries to BranchCache.

So if the web traffic always tries to use BranchCache, why doesn’t SMB? And what’s with this magic latency number? Sounds technical. Well, the SMB latency configuration was designed to give the user a “good user experience” when browsing for files in Explorer and downloading their fav pics of the CEO and other meaningless sales presentations. Typically you would want to access something as fast as possible, right? For fast networks BranchCache will always be slower, but latency and available bandwidth are two very different things: measuring bandwidth is hard (slow) and latency detection is a piece of cake (fast). So since measuring bandwidth every time is impractical, the devs went with latency… the rest is history.

With that said, there isn’t really any downside to lowering it from the default 80ms, except that things could go slower if you have pretty fast links – but if your links were that good you probably wouldn’t be reading this, now would you? Typically, if you have WAN links, it’s better to always cache.

WS2008R2 –  Enterprise Edition. WS2012/16 All Editions

It has to be HTTP 1.1 – so if your requests are going through a proxy, make sure that it’s not forwarding your requests at anything less than that version.

Yes, look under the network components when creating your image. Over and out..

Hmm. It’s not the best news sunshine, but it’s Not The End Of The World (NTEOTW)… Win7 BranchCache is not quite as great as Win8.x/10 BranchCache, and although the two versions (which are helpfully named V1 and V2) CAN co-exist, there can be problems. A bit like two Brothers who get on most of the time but when they don’t, fisticuffs can occur.

So assuming you have 2012/2016 servers, by default they will generate either V1 or V2 hashes depending on the requesting client.

If you want your Windows 10 and Windows 7 clients to share content, then you need to set a group policy for the clients that effectively downgrades the W10 clients to use the V1 hashes – then everyone’s on the same page. If you do nothing, then you can get 2 x downloads of the same content per subnet where there are mixed clients.

V2 has some performance benefits over V1, so you really want to switch as fast as you can. Our StifleR product will use any V2 machines on the subnet and generate V1 content for the Windows 7 clients on the fly, so even the V1 machines get the benefit of the updated V2 deduplication features. This will give you huge savings when doing OSD.

Otherwise we recommend keeping them as separate entities (unless bandwidth is extremely critical) and letting your Windows 10 machines go with V2.

A couple of things to check on the Content Servers:


  1. Make sure they have enough space to store both V1 and V2 hashes. 1% of disk size is the default and you can fill up fast. File storage DeDuplication can help here.
  2. Make sure all servers have the same server secret – the BranchCache checkbox on the DP in ConfigMgr takes care of that for you automagically.


You could just use our StifleR software which takes care of it all for you (and a lot more besides)




Yes and No! Out of the box, no. But we added it in there, so if you use the 2Pint BranchCache for OSD Toolkit, you can create a WinPE version with BranchCache and all of its awesomeness available.

Alternatively you can use WinPE Peer Cache in ConfigMgr land.

A lot of people ask this, and the answer is more of a “well it depends”.

In Windows Enterprise, BranchCache is AUTOMATICALLY enabled for all HTTP traffic using WININET.DLL and WINHTTP.DLL (most windows apps), so that’s all good. It also supports BranchCache for SMB (Files in Explorer etc).

In Windows PRO versions, BITS + BranchCache work fine (in HTTP BITS mode only), nothing else. So no ‘native’ BranchCache support (IE, File Explorer etc), but the APIs are still there..which is what we use here at 2Pint. This means of course that BranchCache is fully available for ConfigMgr and Intune as they use BITS for their HTTP transfers.

So to recap: BITS + BranchCache is on all Windows platforms, regular HTTP & SMB downloads only for Enterprise!

UPDATE: Here’s the official word if you need it. https://technet.microsoft.com/en-us/library/mt613461.aspx#bkmk_os

You have these two pints (good start). Exactly the same, right? Sure. But then your buddy drops a peanut into one of them… hmmm… not the same anymore! Actually, have you ever done that? Try it – the results will astound and amaze you, but you have to wait a while and it works best with fairly fizzy beer…

Well actually these two pints are ‘almost’ the same, but the bottom of one of the glasses now has a peanut in it. Following so far?

Well it’s the same with BC and similar files. When BITS wants BranchCache to help transfer a file, BranchCache will look for common segments across files, and any that are similar will only be copied once. Cool! The power of DeDuplication on the wire.

This works really well with content that is very similar so think .wim files etc..

You can use the PowerShell command Publish-BCWebContent for HTTP stuff, or Publish-BCFileContent for files in shared folders (SMB).

Why would you need to do this? Well if you don’t, new content (such as a new package or application on a ConfigMgr DP) doesn’t get its hashes generated until the first client requests the content. So if the server is really busy, the first client might request content and get no hash. Aaaaargh! No Hash = No Cache, right? So yeah, that client will download a copy of the content but will be unable to stick it into the BranchCache cache. The SECOND client would probably be the one to get the hash and start caching.. Not the end of the world, but not as efficient as we would like it to be.

In the SMB World

Hashgen.exe triggers the generation of hashes that are stored by SMB, and you can run this whenever you need to. These hashes are not available to HTTP.sys, which is where the BranchCache component that generates HTTP responses lives. If the files are stored on a deduplicated volume then there is no need to generate hashes, as the deduplication process also generates hashes compatible with BranchCache. Thus BranchCache works best with SMB when accessing a deduplicated volume on a Windows Server 2012 OS.

Too much Hash, you gotta be kidding!

But, watch the limits: if you fill the Publication Hash Database (1% of disk by default) you get an error like this:
“svchost (7832) PeerDistPubCacheJetInstance: The database C:\Windows\ServiceProfiles\NetworkService\AppData\Local\PeerDistPub\PeerDistPubCatalog.pds has reached its maximum size of 200 MB.
If the database cannot be restarted, an offline defragmentation may be performed to reduce its size.”
What’s up with that? Run the following from the command line: “netsh.exe branchcache show status all”
That will show you the caches in use, their locations etc. Is the Cache full? The Publication Cache (hash cache) you most likely have issues with is the one on servers generating the hashes. It’s about 1/2000 of the content size, so set it to something like (current content size + expected growth)/2000. By default it’s set to 1% of the disk space, we believe. Since most people have a small C: for the OS, the DB gets small. You set it with a netsh command as well.
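As a back-of-envelope helper, the 1/2000 sizing rule works out like this (Python used just for the arithmetic – the ratio is a rule of thumb, not a guarantee):

```python
def publication_cache_bytes(content_bytes, expected_growth_bytes):
    """Rule of thumb from above: hashes take ~1/2000 of the content size."""
    return (content_bytes + expected_growth_bytes) // 2000

# Example: 500 GB of packages today plus 100 GB of expected growth.
size = publication_cache_bytes(500 * 1024**3, 100 * 1024**3)
print(size // 1024**2, "MB")   # 307 MB -- comfortably above that 200 MB default
```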

That’s a lie floating around on some websites – hashes are kept over reboots. Well… it’s true if you are on 2008 R2, but that is legacy as we all know.
If you are on 2008 R2 and bothered by this, then contact us and we will tell you how to use our BranchCache Tool to regenerate the hashes.
On Server 2012/16 you can reboot as much as you like!

get-ADObject -Filter {objectClass -eq 'serviceConnectionPoint' -and Name -eq 'BranchCacheHostedCacheSCP'}

This will list all your Hosted Cache SCP thingies


get-ADObject -Filter {objectClass -eq 'serviceConnectionPoint' -and Name -eq 'BranchCacheHostedCacheSCP'}| Remove-Adobject

This will Nuke ‘Em, so the usual disclaimer applies – i.e. don’t fiddle with it, stoopid!



ConfigMgr Peer Cache: Yet Another Microsoft P2P Technology – Hurrah!

If you make some changes to the content – say for instance you added some .MST files to an MSI Application – this would be treated as a new version when you update the Distribution Points. Peer Cache sources that had already cached the previous version would then have to download the new version of the content in full (and then report back to the MP) before it could be shared.
Bit gnarly, but something to be aware of..

The ‘Holy Grail’ of P2P Content Distribution is to have a single copy of the content (Package/App/Update etc) copied from the ConfigMgr Distribution Point – and for P2P sharing to take care of the rest. But…

Each Peer Cache Enabled client (assuming that they are running on workstations) is limited to 20 incoming connections, so having a single Peer Cache enabled client per location is perhaps not the best idea 🙂 But then we don’t recommend that you enable ALL clients for Peer Cache either! Probably somewhere in between..

Do the maths, and remember that the Peer Cache client must first report back to the site (via the MP) that it has the content before it can share it – this only happens once per 24hrs. So you may want to create a collection of ‘Pioneer’ clients who will cache the content first.

Also, when the other clients request the content, they will get a randomized list of the Peer Cache sources that have the content. So unless you targeted a single client first, the load should be spread automatically.
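If you do want to ‘do the maths’, here’s a back-of-envelope sketch of how fast content can fan out, assuming every served client becomes a Peer Cache source itself in the next round. That is a best case – real-world timing, the once-per-24h reporting cycle, and which clients are actually Peer Cache enabled will all slow this down:

```python
def rounds_to_seed(total_clients, initial_sources, per_source=20):
    """How many 'rounds' of transfers until every client has the content,
    given the 20-incoming-connection limit per workstation source."""
    rounds, sources = 0, initial_sources
    while sources < total_clients:
        sources += sources * per_source   # each source serves 20 new clients
        rounds += 1
    return rounds

print(rounds_to_seed(400, 1))   # 1 -> 21 -> 441 sources: 2 rounds
```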

Using BranchCache alongside Peer Cache will also help to expedite the spread of content amongst Peer Cache sources.

There are a couple of setting within Peer Cache that relate to security.

1. The Peer Cache ‘SuperPeer’ Web Service on the clients can be forced to use HTTPS – you can set this in the Client Settings

2. Peer Cache requires the use of a Network Access Account, which must have Full Control of the cache folder on each client: %windir%\ccmcache

3. If you are using Windows Firewall, rules enabling the ports required to use Peer Cache are automatically created by ConfigMgr

All transfers are performed by the BITS service in the SYSTEM context

A client that is operating as a peer content source sends a list of the content it has cached to the ConfigMgr site, via its Management Point, using state messages – so every time a Peer Cache source adds (or removes) content in the CCMCACHE, it gets reported back to the site DB.

When a client in the same boundary group requests content, each peer cache source which has the content is returned as a potential content source along with the distribution points and other content source locations in that boundary group. This happens via a standard Content Location Request.

The client then chooses a Peer Cache source from the list randomly – as it would with regular DPs. If that source is unavailable it tries the next in the list, and so on. If no Peer Cache source is available, the client will fall back to using a standard DP. Note: currently, if a Peer Cache source is unavailable, the failover to the next content source takes 7.5 minutes – so take care which machines are Peer Cache sources.

This can be a gotcha. Peer Cache is constrained by Boundary Groups, but if a Peer Cache client moves to a new Boundary Group, the data for that system in the ConfigMgr SQL DB will not be updated until the next Hardware Inventory cycle. So one to watch out for – a Peer Cache client that moves could potentially serve data to peers across a WAN link until the move is registered with ConfigMgr. Oops!

Probably not!

You should first look at the type of content that you wish to share, mobility and hardware specifications (disk size etc) of your ConfigMgr estate.

Static systems that don’t move around will make the best Peer Cache clients, so if for example you have a ton of PC based ATMs or PoS machines, they could make useful Peer Cache clients.

There’s a nice article here on creating collections for suitable Peer Cache clients.

Peer Cache clients share ConfigMgr Content only. Any type of ConfigMgr content can be shared, so think Packages, Apps, Updates etc. In order for a ConfigMgr client to share content, it must be designated a Peer Cache client, and the content must be targeted to that system.

So unlike BranchCache, which can share IE/Edge downloads and content, or generic SMB content, Peer Cache clients can only share content that they have in their ConfigMgr cache.

No. Unlike BranchCache, Peer Cache can share content across subnets. ConfigMgr Boundary Groups are used to determine where content can be shared – i.e content will only be available to ConfigMgr Clients in the same Boundary Group. Even if fallback is enabled, Peer Cache clients will never receive content from a Neighbor Boundary Group.

So, get your Boundary Groups in order! – ConfigMgr Boundary Groups

Hellz yeah! BranchCache and Peer Cache make great bedfellows. BranchCache is fairly invisible to the naked eye, but it can help to get content into the ConfigMgr cache quickly, so it can then be shared via Peer Cache.

The new ‘Client Data Sources’ dashboard in ConfigMgr CB 1610 shows data sources for BranchCache and Peer Cache.

Well, simply put, this is Microsoft ConfigMgr’s very own Peer to Peer (P2P) technology. It is designed to reduce bandwidth usage over slow (or fast) links by sharing content between peers (ConfigMgr clients in this case) instead of having every client go back to the Distribution Point for content.

Content is shared from the ConfigMgr Cache and as of ConfigMgr Current Branch Build 1610, all content types can be shared.

There are also now reporting Dashboards and Reports that help to visualise your P2P success rates via Peer Cache and BranchCache.

Want more? read on..

Delivery Optimization Doris: BranchCache Bob’s Secret Crush!

Delivery Optimization (DO) has some ‘built-in’ checks and balances to ensure that the service doesn’t hog resources – be that network, CPU or memory:

DO will suspend all seeding activity if the Operating System indicates that it’s in a “low resource” state (introduced in 1607).

It also maintains a ‘minimum spec’ for devices to partake in Peer 2 Peer transfers (see this article for more details).

The DO service continuously measures the available upload bandwidth and backs off if the connection is bad.

DO doesn’t engage in P2P activities if the system is on battery.

More granular control over these parameters is being added in each new release – so rest assured that the DO service has your back!

We are working closely with Microsoft on this to make sure that DO integrates easily and effectively with StifleR – just as BranchCache/BITS does now.

In planning:
Automatic configuration of DO Group ID (i.e. the boundary of peer 2 peer sharing) – DONE!

Auto Bandwidth throttling – DONE!

Reporting – DONE!

Adding new stuff all the time – keep checking back


The DO Service will not download if it detects that it’s using a ‘Metered Connection’. That’s fine for 4G-type connections, but WiFi is treated as a fast connection – so if you don’t want to download over WiFi, you need to configure your WiFi connection as Metered.
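If you do want Windows to treat a particular WiFi network as Metered (and so stop DO downloading over it), you can flag the profile from an elevated prompt. A quick sketch using the built-in netsh tool – “MyOfficeWiFi” is a placeholder profile name, so substitute your own:

```powershell
# Mark a WiFi profile as metered ("Fixed" cost) so DO treats it as costed
# Run from an elevated prompt; replace "MyOfficeWiFi" with your profile name
netsh wlan set profileparameter name="MyOfficeWiFi" cost=Fixed

# Verify - look for "Cost : Fixed" in the Connectivity settings
netsh wlan show profile name="MyOfficeWiFi"
```

Setting `cost=Unrestricted` reverses the change if you need to undo it.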

DO also has policies available that let you specify either a defined amount of bandwidth (in KB/s) or a percentage of available bandwidth.

See this article about DO policy for more details

Yes, via Policy. The one you want is:

Maximum Download Bandwidth (DOMaxDownloadBandwidth)

This setting specifies the maximum download bandwidth that can be used across all concurrent Delivery Optimization downloads in kilobytes per second (KB/s). A default value of 0 means that Delivery Optimization will dynamically adjust and optimize the maximum bandwidth used.

Now just how DO “will dynamically adjust and optimize the maximum bandwidth used” is anybody’s guess!
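For testing outside of GPO or MDM, the same policy can be seeded directly into the Delivery Optimization policy registry key – a sketch, assuming the standard policy location (check the ADMX for your build if the value name differs):

```powershell
# Cap DO across all concurrent downloads at ~2 MB/s (value is in KB/s)
# A value of 0 restores the default dynamic behaviour
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'DOMaxDownloadBandwidth' -Value 2048 -Type DWord
```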

Currently, DO is only used with ConfigMgr if you enable Express Updates.

Full integration with ConfigMgr is coming soon however – as announced at Ignite 2017!

Currently, it’s not easy! But if you have the latest Windows Insider build (14986 at time of writing), you can use two new PowerShell cmdlets to take a look at DO both when downloads are in-flight and afterwards. We blogged it!

New PowerShell Cmdlets 
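As a quick taste of those cmdlets (names as per the Insider build referenced above – the exact output property names may vary on later releases):

```powershell
# Per-file view of DO downloads - how much came from peers vs plain HTTP
Get-DeliveryOptimizationStatus |
    Select-Object FileId, FileSize, BytesFromPeers, BytesFromHttp, Status

# Point-in-time performance snapshot of the DO service
Get-DeliveryOptimizationPerfSnap
```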

By default, in Enterprise and EDU editions of Windows, DO will only P2P with systems within the internal network. As of the 1709 update, you can configure this via GPO or MDM policies however, and the DO service can obtain content from other peers on the internet if desired. But, like, would ya?

DO is based on a Cloud based service that tracks content and peers, which makes it massively scalable and perfect for modern systems management.

A DO system will also only download content that is relevant to itself, so a Lenovo system wouldn’t be able to download Dell drivers and share them with peers..

Yes, but you have to configure that option via GPO

You can use Group Policy or an MDM solution like Intune to configure Delivery Optimization.

Group Policy: Computer Configuration\Policies\Administrative Templates\Windows Components\Delivery Optimization
MDM: .Vendor/MSFT/Policy/Config/DeliveryOptimization
The setting that you need is DODownloadMode – which should be set to Bypass (100) so that BITS/BranchCache will always be used.
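On a lab box where GPO isn’t convenient, the equivalent policy registry value can be set by hand – a sketch, assuming the standard Delivery Optimization policy key:

```powershell
# Force DO into Bypass (100) so BITS/BranchCache handles the downloads
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'DODownloadMode' -Value 100 -Type DWord
```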

The Group ID is used to prevent Peer transfers from occurring across undesirable network links. You can set a Group ID via GPO for each location so that DO P2P transfers are always contained within that Group ID and location.

If this is not set via GPO (or manually), DO uses Automatic Group Detection:

In Windows 10 1607

The AD site is checked when the client starts, and this is used as the Group ID.

When the AD site is not available, DO will automatically use the authenticated domain SID as the default Group.
A new network connection also triggers a new query of the AD site, which should place the client in the correct group if the system moves to a new location.

(In Windows 10 1511 the AD site is not used – just the domain SID.)

You can also set a Group ID (must be a GUID) for a location manually, and this will enforce P2P boundaries within that Group regardless of Domain etc.
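Setting the Group manually means two policy values: download mode 2 (Group) plus the GUID itself. A sketch against the standard DO policy key – the GUID is generated fresh here, but in practice you’d assign a fixed GUID per location:

```powershell
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization'
New-Item -Path $key -Force | Out-Null

# DownloadMode 2 = Group: only peer with systems sharing the same Group ID
Set-ItemProperty -Path $key -Name 'DODownloadMode' -Value 2 -Type DWord

# Must be a GUID - use one per location/boundary you want to contain P2P within
Set-ItemProperty -Path $key -Name 'DOGroupId' -Value ([guid]::NewGuid().Guid) -Type String
```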


Delivery Optimization (henceforth known as “DO”) is a download and P2P service which was introduced in Windows 10. It is responsible for downloading and sharing various types of content but primarily Microsoft Windows Updates, Windows Store Apps, and XBOX shizzle..

Unlike other Microsoft P2P technologies such as BranchCache, it is currently controlled by Microsoft – that is, the content that it downloads can only be provided by Microsoft.


StifleR FAQ: The Why, What, Where and When of our Content Distribution and Management Engine


Yes, inasmuch as Intune uses BITS/BranchCache, so it’s pretty seamless!

Coming soon – Intune will start to leverage the new Delivery Optimization Peer-to-Peer and caching service. Watch this Ignite session with 2Pint’s very own Andreas for more info about Delivery Optimization!

It can make sure that Peer Cache content transfers are optimized in terms of priority, and can help BranchCache and Peer Cache to work Better together. You can also see all your Peer Cache transfers in the StifleR dashboards as they happen.

Think of StifleR as the Air Traffic Controller for your in-flight Content Transfers. Nomad and OneSite do the download, P2P and caching. We simply work with the Microsoft Peer2Peer technologies that you already own, and provide the bandwidth control and prioritization.

Also, Nomad and OneSite only work with ConfigMgr content, whereas StifleR can control ConfigMgr content and a whole lot more. In fact some of our largest customers don’t even use ConfigMgr.

With our StifleR’s MOM add-on – you can extend your bandwidth management capabilities to all your Cloud hosted content too! Think Office 365 and beyond..

BITS, BranchCache and Peer Cache are the main Microsoft download and Peer to Peer technologies within Windows, but none of them provide overall visibility, real-time control or reactive Bandwidth Management.

You can get good results with these technologies, but if you add StifleR into the mix you can achieve awesome P2P efficiency leading to bandwidth savings, less server infrastructure, and happy users.

StifleR ensures that your users obtain the content that they need from a source local to them in the most efficient way possible, without upsetting the network folks.

Basically moving software from A to B anywhere in the world like a complete boss is our business. Locally, Nationally, Internationally and to infinity and beyond (planned), from the Cloud, local servers, or corporate Data Centre, StifleR turbo charges your infrastructure to deliver content reliably and efficiently over WAN, LAN, Wifi or HiFi (ok not HiFi but you get the idea).

Think of it this way. Would you get onto an airplane knowing that Air Traffic Control Radar wasn’t working? ’Nuff said. Without StifleR your WAN and LAN traffic is ‘flying blind’ and ‘Jimmy in the Mailroom’s Game of Thrones download’ could be potentially blocking business critical content such as your Point-of-Sale data or a critical Windows Update.
As we say – ‘StifleR – How to be the Network team’s best friend in 5 minutes or less’

Once configured with the relevant Network and Location data, StifleR is designed to run on ‘Auto-Pilot’ and will transparently and automatically manage and adjust bandwidth controls across the Enterprise. In cases where you need to change settings manually, to expedite certain custom deployments for example, configuration changes can be automated and scripted via the 2Pint StifleR WMI Provider. There is no need for a GUI, no console, no workbench…just reliable Microsoft Automation.

Once configured, the sky’s the limit as far as automation goes. You can use WMI events or Scheduled Tasks to trigger scripts that set different bandwidth limits outside business hours for example.

The Rules define what happens to each content download.
The StifleR client will check through its queue of active BITS downloads and will prioritize them according to a locally held XML configuration file containing a set of rules that are configured centrally by the administrator and then automatically distributed to clients.

This XML file contains a simple rule set that defines the content download jobs and the priority that the administrator has assigned to each job type.
So for instance, all Outlook Global Address Book sync could be set to the BITS priority of LOW, while Windows Update patches would be set to HIGH. Since BITS transfers only one job at a time, round-robining among jobs of the same priority, you can effectively control which downloads complete ahead of others. All of this configuration can be changed centrally and replicated to clients in seconds.
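You can see this BITS priority mechanism with the built-in BITS cmdlets (the job name below is hypothetical – use whatever shows up in your own job list):

```powershell
# List active BITS jobs with their current priority and state
Get-BitsTransfer -AllUsers | Select-Object DisplayName, Priority, JobState

# Demote a chatty background job so more important transfers complete first
Get-BitsTransfer -Name 'OAB Sync' | Set-BitsTransfer -Priority Low
```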

Download the StifleR Rules Guide Doc

Each client makes a lightweight connection to the StifleR server and sends up information about its current content download queue. This information is evaluated and the server assigns the most suitable system per Location to be the “Red Leader”. The Red Leader system is then responsible for downloading content and obeying the defined network bandwidth limits for that Location. Other clients at that same Location that require the same content will not download from the remote location, but instead will wait and then transfer the data locally from the Red Leader using the Microsoft BranchCache/Peer Cache P2P transfer functions. The end result is that rather than all clients downloading remote content, WAN traffic is limited to a single transfer between the Red Leader and the remote content server. Should the current Red Leader system become unavailable, a new Red Leader is automatically selected, resulting in an uninterrupted, efficient and dynamic workflow.

Of course you want to see where all yer content is. So we give you snazzy real-time updates on that. You’re welcome. Our StifleR Dashboards give you a great view of all your content transfers while they are in-flight.
If you want to see what happened to your content over time, including some awesome stats on P2P effectiveness, we have our free Reporting Solution for that.

Well, decide for yourself! – Unlike most other vendors, we just publish our pricing right here on the 2Pint website. Did you know also that ALL our software is FREE for Education and Non-Profit Orgs?

Of course! We made it configurable via good old WMI/PowerShell – cos we all know and love that don’t we? We don’t need no stinking GUI! #realmendontclick Seriously though, automation is the key here, and most Admins will be able to get up and running with StifleR in no time at all.

Well. Having said that, we do have some frighteningly beautiful Dashboards – but they are purely for visualization of your content transfers and P2P magic.

Oh no Sir! You can probably nuke quite a few servers, because we make downloads so efficient you won’t need them at low-bandwidth sites anymore. Hurrah – even more cash saved! Even in well-connected LAN speed locations, StifleR can remove load from Distribution Points, WSUS servers, Content Servers etc so that you need much less infrastructure than before.

Yep, it scales magnificently, because it’s built on Microsoft SignalR, which is designed to scale to Everest-type proportions. For super-large deployments, SignalR can be hosted over a REDIS or SQL backplane – and then we’re talking epic blockbuster proportions.

StifleR can manage downloads from the Cloud, and will happily reduce bandwidth usage for those transfers. However, that type of content is often not enabled for Peer 2 Peer sharing. So for Office 365 and other Cloud based stuff? You need to talk to StifleR’s MOM! That’s a separate add-on that handles Cloud based content and preps it for P2P caching and sharing. Have a look here for more info.

Ya, we do that too, OK? Where Intune uses BITS – we can manage the downloads. As for other solutions, anything that uses Microsoft Peer 2 Peer services in Windows – we’ve got it covered. Currently that’s BITS, BranchCache and ConfigMgr Peer Cache. In 2017 we will be adding support for Delivery Optimization, generic HTTP transfers, and lots more!

Oh yes. The ConfigMgr Peer Cache P2P function and the BITS and BranchCache services are key to Software Distribution in Microsoft ConfigMgr. StifleR simply builds on this and improves Software Deployment success by better managing the downloads. No configuration changes are required to SCCM – just business as usual.