So recently there was a Twitter discussion that sparked my interest; it was about a very common scenario that we have been discussing with our customers over the last few years. The scenario is the following:
A customer with a large (or small) number of remote offices, each currently hosting an SCCM Distribution Point.
They are facing issues with content distribution for one or all of the following reasons:
- The servers are getting old; should they replace them with new ones?
- Disk space is running out…
- They distribute ALL content to ALL DPs, ALWAYS
- The DPs are often out of sync (see points 2 and 3)
- They are typically configured to disallow any content replication during business hours, which, combined with points 3 and 4, creates a difficult scenario
- They can’t distribute anything company-wide in under a month or two, cos it takes forever to get the content out there
- They have too many DPs to manage, so most of the work week is a whack-a-mole of DP ‘fixing’
- Each ConfigMgr upgrade is a mess, cos the DPs don’t have any space, are out of reach, etc.
So often a brainstorming process follows, to try to come up with a solution. Something along the lines of “couldn’t we solve this with a bit of peer-to-peer?” The answer is all too OFTEN wrongly assumed to be a definite NO, with arguments like “We have our local servers there and it feels great”, or “they have 4000 GB of content on them and we can’t store that on our client machines.”
Well, firstly: our stats show that most of that 4000 GB of content is just being used to keep the office warm and the world a little hotter. So, isn’t there a better way to deal with this, by only keeping the content that is REQUIRED at that location, while at the same time keeping the server-hugging people happy? Ya think?
A better way: Peer-to-Server & Server-to-Peer (P2S2P).
See, the issue is that 99% of the time, it’s more bandwidth-friendly to send the stuff that you need over the wire with P2P, and not care about the rest of the content. Of course, this depends, so this post is not about the greatness of client P2P; it’s about another great, largely unknown feature of ConfigMgr content distribution – used together with BranchCache in Hosted Cache (HC) mode. Basically, peer-to-server and server-to-peer. I won’t go into the details of how Hosted Cache mode works, but simply put, it involves having a server that sits in the local office, acting as a ‘kind of a proxy’ for the P2P content. You get the benefit of P2P, but also the benefit of a large, beefy local repository that won’t leave the premises or shut down unexpectedly.
The concept goes like this: we take your static/pull DP and turn it into a BranchCache Hosted Cache server. That way, you don’t need to distribute content to any remote DPs before you can deploy to clients. You just deploy directly to those remote locations from a few centrally (or Cloud) hosted DPs. Hurrah. But how does the Hosted Cache client help in the application deployment scenario?
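As a rough sketch, the BranchCache side of that setup is only a couple of commands (the server name below is a placeholder, and in real life you’d push the client setting out via Group Policy or ConfigMgr client settings rather than by hand):

```shell
:: On the old DP in the branch office: turn it into a Hosted Cache server.
:: clientauthentication=NONE sidesteps the V1-era HTTPS certificate dance.
netsh branchcache set service mode=HOSTEDSERVER clientauthentication=NONE

:: On each client in that office: point BranchCache at the local HC server.
netsh branchcache set service mode=HOSTEDCLIENT location=hc01.branch.contoso.com

:: Check the result on either machine.
netsh branchcache show status all
```

On Windows 8 / Server 2012 and later, the `Enable-BCHostedServer` and `Enable-BCHostedClient` PowerShell cmdlets do the same job.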
Firstly, clients will request content from their remote HQ/Cloud DPs. As they start downloading, they will check in with their new best buddy (the Hosted Cache component): “Hey, do you have this first block of data for the new sales app installer?” If this is a new file, and that block of data does not exist in this location, the server will of course say, ‘Nope, never seen that dude’. Here is where the beauty of the Hosted Cache server comes into play. As the client downloads the first block of data, it reports back to the HC server: “Hey, I’ve got this chunk of data now, do you want it?” Typically, the server will say, “Yep, that will do just fine”. The server then goes and requests the data from the client, and voila: it can then be served to any other peer in that location. This whole process happens within a couple of milliseconds, so it’s pretty darned efficient. The next client that requests the same data will get served from the Hosted Cache server. This addresses a few of the things that worry some hardcore peer-to-peer haters: there is no CPU/disk/network overhead on the client machines. Also, for the multicast-hating folks (large Wi-Fi subnets, for instance), this eliminates the need for any local multicast messages. Win-win!
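To make that dance concrete, here is a toy model of it in Python – purely illustrative (the real exchange runs over the BranchCache retrieval protocol using content hashes, not these made-up classes), but it shows why only the first client in the office ever hits the WAN for a given block:

```python
import hashlib

class HostedCacheServer:
    """Toy stand-in for a BranchCache Hosted Cache server."""
    def __init__(self):
        self.cache = {}  # block hash -> block data

    def has_block(self, block_hash):
        return block_hash in self.cache

    def offer_block(self, block_hash, data):
        # The server validates the offered data against its hash before storing.
        if hashlib.sha256(data).hexdigest() == block_hash:
            self.cache[block_hash] = data

    def get_block(self, block_hash):
        return self.cache[block_hash]

def client_fetch(block_hash, hc, download_from_dp):
    """Ask the local Hosted Cache first; fall back to the remote DP,
    then offer the freshly downloaded block back to the cache."""
    if hc.has_block(block_hash):
        return hc.get_block(block_hash)   # served locally, no WAN hit
    data = download_from_dp()             # WAN download from the HQ/Cloud DP
    hc.offer_block(block_hash, data)      # "Hey, I've got this chunk now"
    return data

# Demo: two clients in the same office request the same block.
block = b"first block of the new sales app installer"
h = hashlib.sha256(block).hexdigest()
hc = HostedCacheServer()

wan_downloads = []
def dp_download():
    wan_downloads.append(1)
    return block

client_fetch(h, hc, dp_download)  # client 1: WAN download, seeds the cache
client_fetch(h, hc, dp_download)  # client 2: served from the Hosted Cache
print(len(wan_downloads))         # → 1 (only one trip over the WAN)
```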
This method fixes a lot of the DP distribution issues listed above:
- Disk space: no need to keep the entire catalog on the Hosted Cache server, just what has been requested, so disk space frees up automagically. (Yes, that’s a 2Pint term.)
- No need to distribute anything over the WAN before deployment time, although you can pre-cache just as efficiently in this scenario too.
- Easy to keep your central DPs in sync, cos they are all on a Gigabit network and you can go down to the datacenter and give them a good kicking if needed.
- If the HC server is lost, the clients can be configured to (yes, you guessed it) automagically switch over to distributed cache mode. This is far better and easier to fix than the loss of a DP.
- No need to touch the Hosted Cache servers as part of a ConfigMgr upgrade.
- All content on the Hosted Cache server is 100% encrypted (V2) so if someone steals the server you won’t have your entire enterprise app repository floating around the dark web.
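On that fallback point: flipping a client between modes is a one-liner, which you could wrap in a script or a ConfigMgr baseline (the server name is a placeholder):

```shell
:: HC server died? Fall back to distributed (peer-to-peer) cache mode.
netsh branchcache set service mode=DISTRIBUTED

:: Server is back? Point the clients at it again.
netsh branchcache set service mode=HOSTEDCLIENT location=hc01.branch.contoso.com
```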
All of this is of course COMPLETELY ‘unsupported’, but as the BranchCache transfer layer sits below BITS, there is no real interaction with ConfigMgr or BITS at all, so all the stats and reporting etc. will just show ‘bytes from peers’ even if they came from the Hosted Cache server. A note on the unsupported-ness: I think it was mostly considered ‘out of scope’ to keep the testing complexity down when the ConfigMgr team started playing with it. Also, the ConfigMgr team did a lot of work with V1 of BranchCache, in which the Hosted Cache server had some flaws and depended on some HTTPS certs being put in place, which I guess put people off. With V2, it’s just plug and play, baby.
Bonus info: Yes, content can be injected into the Hosted Cache server via scripting, i.e. you can merge all the existing DP content into the cache as a migration process.
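As a hedged sketch of what that migration could look like with the in-box BranchCache PowerShell cmdlets (the paths and package file name are illustrative; check the cmdlet docs for your OS version):

```powershell
# On a machine holding the old DP content: hash the files and stage the data.
Publish-BCFileContent -Path "D:\DPContent" -StageData -StagingPath "D:\BCStage"

# Wrap the staged data up into a cache package.
Export-BCCachePackage -Destination "D:\BCExport" -StagingPath "D:\BCStage"

# On the Hosted Cache server: import the package straight into the cache.
Import-BCCachePackage -Path "D:\BCExport\PeerDistPackage.zip"
```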
Extra bonus info: You can also pre-cache content to the Hosted Cache server, as well as to any of the peers, using any of the usual methods.
Spoiler: We like Hosted Cache so much that we wrote our own custom implementation of it, with the cool feature that it runs on a workstation too – so in the above scenario, you could still replace the server; just use a Windows 10 machine with a big old disk instead.
Stay tuned for StifleR 2.0 baby!