Even now – after all this time – passwords are hard.
Yes, I know we should all be using the cloud with its fancy Key Stores/Key Vaults, or an enterprise store such as CyberArk, and we all know we shouldn't be choosing passwords ourselves. However, like it or not, ‘things’ still need passwords, whether it's for an encryption key, a wireless network or just a plain and simple password.
Vendors are also part of the problem, placing restrictions on the types of characters allowed, or even their position within a password. Still. In 2020! (VMware and Cisco are pretty horrific at this.)
Also, when you are building stuff in automation, it's inevitable you will need some sort of password generation. I use PowerShell a lot and found methods that can be used to do this.
This is based on the Pseudo Random Number Generator (PRNG) provided by the Windows Cryptographic Service Provider (CSP), which generates randomness in a ‘good enough’ manner. If you need more than this, well, you know why and what to do about it, but for most uses it is pretty good.
It uses vendor-safe methods and arrangements out of the box, with a pretty good 16 character length. You can append flags to make it stronger, output a number of passwords at once, or make them readable. It's up to you.
# Import the module
PS > Import-Module .\Password-Functions.psm1
# Run the function
PS > New-Password
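The core idea is simple: ask the CSP-backed PRNG for random bytes and map them onto a character set. A minimal sketch of that approach (the character set and length here are illustrative, not the module's actual defaults):

```powershell
# Illustrative sketch only - the real module adds vendor-safe rules and flags.
# Uses the Windows CSP-backed PRNG via .NET.
$charSet = [char[]]'abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789!#%+'
$length  = 16

$rng   = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$bytes = New-Object byte[] $length
$rng.GetBytes($bytes)

# Map each random byte onto the character set.
# (A simple modulo map has a slight bias; fine for a sketch, avoidable if it matters.)
-join ($bytes | ForEach-Object { $charSet[$_ % $charSet.Count] })
```

Everything beyond this (readability, complexity flags, multiple outputs) is just shaping that raw randomness.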
So now we have discussed the issue of Public IP addresses, as well as concepts around Azure ARM templates.
The next step is to remove their use. It's really quite a simple process of removing entries from both the parameters.json and template.json files.
Consider the Cisco ASAv deployment from the previous deployment.
Looking through the values, a number of items relating to the device's Public IP address can be found. Luckily, Cisco used the string ‘public’, which makes things simple to find.
They set things like the Public IP SKU (Basic), the allocation method, the DNS name and the address object name.
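The entries look something like this (the names and values here are illustrative, not copied from the Cisco template):

```json
"publicIpAddressName": { "value": "asav-public-ip" },
"publicIpAddressType": { "value": "Dynamic" },
"publicIpAddressSku":  { "value": "Basic" },
"publicIpDnsName":     { "value": "asav-demo" }
```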
These values are referenced within the template.json file; again this is the ASAv version.
The first section describes the variables used for interactive deployments, or the defaults applied if they are not specified within parameters.json or overridden during deployment.
Further down, each one is described as a resource. This is the main section that defines the underlying Azure resource: a Public IP address object, along with the Azure ARM API version that is called to create the object.
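A Public IP resource in an ARM template has roughly this shape (API version and parameter names are illustrative, not taken from the Cisco template):

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2019-11-01",
  "name": "[parameters('publicIpAddressName')]",
  "location": "[parameters('location')]",
  "sku": { "name": "Basic" },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[parameters('publicIpDnsName')]"
    }
  }
}
```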
Lastly, towards the bottom of the file is an area where the Public IP address is bound to the network adaptor. In this case it's Nic0, the ASAv's primary interface.
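The binding is the `publicIPAddress` property on the NIC's IP configuration, and removing the Public IP means deleting that property (again, the names here are illustrative). Watch for a matching `dependsOn` entry on the NIC resource too:

```json
"ipConfigurations": [
  {
    "name": "ipconfig1",
    "properties": {
      "subnet": { "id": "[variables('subnet1Ref')]" },
      "privateIPAllocationMethod": "Dynamic",
      "publicIPAddress": {
        "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIpAddressName'))]"
      }
    }
  }
]
```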
I'd love to tell you there is a magic way of editing this, but sadly there is not. Using a little bit of search and delete, my method is to:
Back all files up.
Identify any entries in parameters.json relating to the Public IP address.
Search for those names within the template.json file.
Remove the entries and the regions around them, remembering to keep the JSON well formed.
Delete the entry from parameters.json.
Repeat until all entries are removed.
Test a deployment, verifying it using the Azure Activity Log for your subscription.
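For the search step, `Select-String` makes quick work of finding every reference (assuming both files sit in the current directory):

```powershell
# Find every line mentioning the Public IP across both files
Select-String -Path .\parameters.json, .\template.json -Pattern 'public' -SimpleMatch |
    Select-Object Filename, LineNumber, Line
```

`Select-String` is case-insensitive by default, which helps given the mixed casing vendors use.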
I have successfully created these for the following infrastructure devices:
Palo Alto NG Firewall
CheckPoint NGX Firewall
Pulse VPN appliance (used to be Juniper)
RSA Authentication Server
I'll add some links to some before and after examples; however, be aware that the templates, as well as the Azure APIs, could in theory change.
Good luck. As I say, it's not easy, but it is very possible.
For those who don't know ARM, this is Azure Resource Manager. It is used for everything in Azure: changes, additions and deletions to Azure infrastructure are all made via a common API, which takes a change then validates, queues and reports on it. Regardless of whether you use the Azure Portal, Cloud Shell, the API, PowerShell or other tools, they all go through ARM.
Let's take a look.
First of all, visit portal.azure.com and authenticate to your subscription. Next, click in the master search box at the top, type ‘Activity’ and select the Activity Log.
You may then need to change the filter applied to see jobs or tasks that have occurred.
Each item can be examined, and a JSON representation of the job and its actions can be seen.
Any deployment requires a set of files. When using the Azure Portal, these are created by the web pages and then used by Azure Resource Manager to deploy as a job.
You can see this just before you hit deploy.
Then, at the bottom, select the ‘download template and parameters’ link.
This will display a template page (I'll do another post on this); for now, select download.
This will download a ZIP file to your local machine. Save and open it. From an example I deployed, the following files were created.
The key files are parameters.json and template.json. These are mandatory. The others are scripts and helpers to run the deployment task (.ps1 for PowerShell/Cloud Shell, .sh for Bash, .rb for Ruby deployments, etc.).
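If you'd rather skip the helper scripts, the two mandatory files can be deployed directly with the Az PowerShell module (the resource group name here is illustrative):

```powershell
# Deploy template.json with its matching parameters file into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName 'rg-demo' `
    -TemplateFile .\template.json `
    -TemplateParameterFile .\parameters.json
```

The resulting job shows up in the Activity Log just like a Portal deployment does.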
All of the settings for the deployment are contained within the parameters.json file. Open it up with a suitable editor such as VS Code or Notepad++.
As you can see, the variables are well named and obvious, and being in JSON format it's easy to check it is well formed. Values can be changed and, as long as they validate against your existing constructs (for instance subnets and vNets), it should be OK.
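As an illustration of the shape of the file (these names and values are made up, not from any particular template):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName":             { "value": "demo-vm01" },
    "vmSize":             { "value": "Standard_B2s" },
    "virtualNetworkName": { "value": "vnet-prod" },
    "subnetName":         { "value": "subnet-servers" }
  }
}
```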
A typical organisation's deployment will usually have existing infrastructure such as Azure Firewalls, Load Balancers and Virtual Network Gateways (VPN termination) to front access into IaaS VMs, App Services and other systems.
Larger organisations will have (or should have!) policies around governance of configuration within their cloud deployments. Smaller ones should at least consider it. Common practice is to safeguard admins from creating things such as new Azure AD deployments in each subscription, deploying oversized or very costly VMs, or adding new vNets or even NSGs. These can all be controlled with appropriate Azure Policy and AAD roles.
A common policy is to stop admins from creating new Public IP addresses. Why do you need them when you have existing ExpressRoute connections from your corporate sites, or other methods of connectivity? You can effectively bypass layers of security controls and services by simply provisioning a new device, hence it's a sensible control.
This poses an issue, however. Most 3rd party appliances from the store require a Public IP address to be created: for instance the Cisco ASAv, Palo Alto NG Firewalls, F5 BIG-IP (although they have recently recognised the issue and make some templates available without one) and other services.
Consider the diagram below showing the relationship of key Azure constructs and components with a deployment.
It shows how a VM has components associated with it such as disks and Network Interfaces. The Network Interfaces are connected to a subnet, which is part of a Virtual Network (vNET). The Network Interface is also associated with a Public IP Address.
However, there is an important concept in Azure to grasp: any service deployed in Azure with a public-facing IP does not have knowledge of its public IP address. Even though a Network Interface may be associated with a Public IP address, the host has no knowledge of it. Not its external IP address, routing, nothing; Azure performs the translation between the public and private addresses on the host's behalf. If it has a default route set, it will be via whatever subnet it is connected to, unless it has a User Defined Route, which allows an override of Azure routing.
The vendors do not make it easy to deploy devices without a Public IP address. In fact, they may even state it's not supported, although this would be difficult to prove. It is very possible to build devices this way. Luckily, Azure and its fantastic ARM templates come to the rescue.
I'll describe this in follow-up posts (coming soon).
One thing I have been working on for a long time is to create a set of PowerShell modules for Cisco ACI.
For those that don't know, ACI is Cisco's next generation data centre switching fabric; kind of the next step after Nexus in NX-OS mode. It uses modern techniques such as hardware controllers (APICs, which are C220 servers) along with Nexus 9500 and Nexus 93xxx switches to form a leaf-spine deployment.
Whilst ACI has a GUI, an MO and NX-OS-like CLI (not perfect at all!), along with Python and even Ansible support, these quickly run out of steam. Whilst Python is a great language, it still takes some learning. It's also not so intuitive when passing results between methods.
I realised there is nothing for ACI that utilises PowerShell in a similar way to VMware NSX's most excellent PowerNSX.
Hence, my first cut of ACI-PoSH is published to GitHub. There is a ton of work to do on this (documentation being a biggie) but it works.
I am quite lucky to have been involved in some large scale ACI deployments; however, when offline from these, I have two environments that I use.
The Cisco DevNet ACI Sandbox, available at https://developer.cisco.com/site/sandbox/. This is a site that contains working examples of most Cisco products, one being the always-on ACI Simulator, which is an Internet-connected APIC. There is no charge, but you do need to register or log in. Just the job for testing, learning and development.
The Cisco ACI Simulator (see here). You need a valid CCO account and ACI software support to download it. It will run in most hypervisors (I use VMware Workstation and ESXi) but needs at least 80GB HDD, 8 cores and 16GB RAM available. You will also need your friendly Cisco Account Manager to authorise the activation code.
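Either environment can be driven over the same REST API the module wraps. As a taste, a hedged sketch of logging in with raw PowerShell (the sandbox URL is as listed on the DevNet page at the time of writing, the password is a placeholder, and `-SkipCertificateCheck` needs PowerShell 6+):

```powershell
# Log in to an APIC; aaaLogin.json is the standard ACI REST authentication endpoint
$apic = 'https://sandboxapicdc.cisco.com'   # DevNet always-on sandbox
$body = @{ aaaUser = @{ attributes = @{ name = 'admin'; pwd = '<password>' } } } |
        ConvertTo-Json -Depth 4

Invoke-RestMethod -Uri "$apic/api/aaaLogin.json" -Method Post -Body $body `
    -SessionVariable aciSession -SkipCertificateCheck

# Reuse the session cookie for further calls, e.g. list tenants via the fvTenant class
Invoke-RestMethod -Uri "$apic/api/class/fvTenant.json" `
    -WebSession $aciSession -SkipCertificateCheck
```

ACI-PoSH exists to hide exactly this kind of boilerplate behind friendly cmdlets.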
I'll be adding more info here in later posts about just how to use it.
Earlier this year, I decided to do some Microsoft Azure upskilling. As part of our Microsoft Partner status, we get Azure credits along with other benefits. We decided to re-platform our services and using Azure made sense.
It can be overwhelming when first accessing Azure. There are so many services, products and features it can be hard to see the wood for the trees! One advantage is there is so much information available out there; however, Azure changes so frequently that it can quickly become out of date.
I was going to work towards the Azure Microsoft Certified Professional (MCP) then Microsoft Certified Solutions Expert (MCSE); however, I learnt one morning that Microsoft was going to change its certification schemes. I also learnt the new exams were in beta AND there was an 80% discount for the first 300 applicants (worldwide).
I managed to bag both the AZ-100 and AZ-101 exams, with just under one month to take them. There were also no study guides as such, other than the exam synopsis. Luckily, I had been studying for the MCSE track and there was some overlap… Lots and lots of studying, however.
In August I took the exams, not knowing if I had passed or not; Microsoft only releases exam results once the exams go public. A week or so ago, I suddenly had an email saying congrats on passing both of them. I was pleased, as it made the hard work worthwhile.
So it's another certification for the bag. Not sure if I'll continue, but I suspect I may, or perhaps do one of the Amazon AWS exams to keep neutral 🙂
It's been a long, long time since I blogged. Lots has changed and there's lots to update on.
One major thing is I finally bit the bullet and dropped Windows 10 Mobile. I was a long-time user of W10M/8.1 Phone/8 Phone, but I accepted fate and realised I was missing out on functionality, security, apps and tools. Whilst I still think Windows 10 is brilliant, I do occasionally use an Android tablet. The flexibility and app store are very impressive, plus there's no way I'm ever going back to Apple.
So, having done some research a while back, nearly going for a Samsung Galaxy S8 (very, very expensive) and then being curious about OnePlus, I preordered a OnePlus 5.
If you don't know about them, check them out at https://oneplus.net/uk/5. It's bloody brilliant.
The OS is bang up to date, fast and fluid. There's no crapware on there and the camera is great. The battery is brilliant, the screen clear, and it's nice and light. Well designed, and I must say I'm seriously impressed. Ignore the noise about the ‘jelly effect’: most of it comes from the fanboy sites and is hype over nothing.
More updates soon.
I use an HP MicroServer N40L at our office. It's a great piece of kit that I have had for a number of years, and it is used for a variety of purposes such as NAS, media sharing, print/file server and virtual machine host…
I recently decided to upgrade from Windows Home Server 2011 to Windows Server Essentials 2012 R2, as well as some hardware upgrades.
Sadly I found that Essentials 2012 R2 does not have a driver available, so support for this RAID card is not possible. One thought was to use the Server 2008 R2 x64 driver, but that sounded like a bad idea to me. I then decided to look at Windows Server Storage Spaces, a technology Microsoft have been working on for a long time. I had 2 x 4TB WD Green and 2 x 2TB WD Green drives in the 4-way SATA bay.
Not wanting to install the OS on the 4TB drives, I decided to make use of a spare SSD. This was then connected to the motherboard SATA port.
I then realised there is an eSATA port on the rear of the server. With a bit of a Heath Robinson hack, I managed to get this routed internally and into a spare hot-swap 3.5″ bay, which I installed into the 5 1/4″ CD drive bay. The mod was to bend the left edge of the flap that covers the PCI slot screws outwards with a multitool, which allows the eSATA cable to be routed internally.
A quick addition of a SATA power splitter and job done. Six drives in a four-drive server!
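With the drives in place, creating the pool itself is only a couple of cmdlets. A sketch of the sort of thing I mean (pool and disk names are illustrative):

```powershell
# Gather the drives that are eligible for pooling (the WD Greens)
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool and a mirrored virtual disk on top of it
New-StoragePool -FriendlyName 'DataPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName 'DataPool' -FriendlyName 'Data' `
    -ResiliencySettingName Mirror -UseMaximumSize
```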
I need to make sure recovery will work with Storage Spaces going forward, but the health tools in Server Essentials seem to be quite good, so time will tell.
Looking good so far….