VirtualBox VM with both Host (Host-Only) and Internet (NAT) access

I am setting up a Linux VM so that I can use it as an LDAP Server in an Okta test environment.  I’m going to add the Okta AD Agent and keep the LDAP server in sync with my dev Okta environment.

To do so, I needed a VM (let’s call it the Guest OS) that could be reached by the Host OS (in my case a MacBook) and could also reach the internet to sync any LDAP changes.

VirtualBox offers several types of network interfaces. After researching them, I determined that I needed a NAT interface so the Guest can reach the internet, and a Host-Only interface so the Host can communicate with the Guest. I’m using VirtualBox 5.1.14 on macOS.

In the VM’s network settings, Adapter 1 is attached to a Host-Only adapter and Adapter 2 to NAT.
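
If you prefer the command line, here’s a sketch of the equivalent setup using VBoxManage; the VM name “ubuntu-ldap” is a placeholder, and I’m assuming the newly created host-only network comes up as vboxnet0:

    # Create a host-only network on the Host (typically vboxnet0)
    VBoxManage hostonlyif create

    # Adapter 1: Host-Only on that network; Adapter 2: NAT
    VBoxManage modifyvm "ubuntu-ldap" --nic1 hostonly --hostonlyadapter1 vboxnet0
    VBoxManage modifyvm "ubuntu-ldap" --nic2 nat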

However, when you set up these two interfaces and start your VM (in my case Ubuntu Server 16.04), it can’t reach the internet; a ping to google.com fails.

Running ifconfig shows that the NAT interface (enp0s8) isn’t up, just the Host-Only interface (enp0s3).

I’ve found two good ways to activate the NAT interface:

  • For a temporary fix that has to be rerun on each reboot, run: sudo dhclient -v enp0s8
  • For a permanent fix, add an entry for enp0s8 to the /etc/network/interfaces file (see the sketch below), then run sudo reboot. Once you log back in, pinging google.com succeeds, and ifconfig shows that enp0s8 is now active at boot.
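
A minimal sketch of that entry, assuming enp0s8 should get its address via DHCP from VirtualBox’s NAT service:

    # /etc/network/interfaces (appended): bring the NAT interface
    # up at boot and configure it via DHCP
    auto enp0s8
    iface enp0s8 inet dhcp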

Now it’s time to script an LDAP setup and user creation with OpenLDAP.

Evasive Data Needed for Basic OpenID Connect with Azure AD

I’ve been working on a basic app to configure OpenID Connect with an Azure AD in my personal tenant. However, I ran into a few roadblocks along the way, and working through them might be useful to others.

First, just converting the out-of-the-box Visual Studio C# MVC app to use HTTPS was throwing errors with my local IIS Express (TLS 1.0, 1.1, and 1.2 errors in IE and “Could not connect” errors in Chrome). It turned out this was due to an issue with my Visual Studio 2015 install from a few months earlier. Simply going to Programs and Features and running a repair on IIS Express resolved that issue.

The next challenge involved plugging the Azure Tenant ID into the OWIN configuration as the Authority. Determining the Tenant ID turned out not to be so simple. What you’re looking for is what to put after “https://login.microsoftonline.com/”. It boiled down to two options.

  1. Use your AD domain (*.onmicrosoft.com). You can find it by going to your Azure AD and looking at its domain, something like “clownshoes.onmicrosoft.com”.
  2. Use the Tenant GUID. From your Azure AD, click on App Registrations and then choose Endpoints; the GUID is embedded in each of the URLs.

Your resulting Authority URL will look something like “https://login.microsoftonline.com/clownshoes.onmicrosoft.com” or “https://login.microsoftonline.com/9defdsdf-6345-32cr-193c8444444a”.
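
For context, here’s a minimal sketch of where that Authority value lands in the OWIN startup code, using the standard Microsoft.Owin.Security.OpenIdConnect middleware; the ClientId and RedirectUri values are placeholders:

    using Microsoft.Owin.Security.Cookies;
    using Microsoft.Owin.Security.OpenIdConnect;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());

            app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
            {
                // Application ID from the App Registration (placeholder)
                ClientId = "your-client-id",
                // Your tenant domain or GUID goes at the end of the Authority
                Authority = "https://login.microsoftonline.com/clownshoes.onmicrosoft.com",
                // Must match a reply URL on the App Registration (placeholder)
                RedirectUri = "https://localhost:44300/"
            });
        }
    }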

One final note: “https://login.microsoftonline.com” has replaced the deprecated “https://login.windows.net”, and you should use it exclusively going forward.

Here are the results, with the first-name claim displayed.

Beginning OpenID Connect/OAuth 2.0

At my company, we had become proficient at doing SAML to federate with various SaaS apps.  We had a tool that no one had ever heard of, but it did SAML really well.  When teams came to us with new SaaS apps they wanted to federate with, we basically approached it with the mindset that if it isn’t SAML, they don’t know what they’re doing.

That all changed with the emergence of custom development both in the cloud and on mobile devices (even custom development on Windows 10 machines).  SAML didn’t fit.  We are now using Okta as our IdP, and our first foray away from SAML was doing WS-Fed with Office 365.  It went smoothly and works great, as long as all we ever want to work with is O365 and Azure.  It didn’t help us on-premises or with the other use cases mentioned above.

That brings us to OpenID Connect, which adds identity capabilities to OAuth 2.0.  We now know that we need to become experts in this framework and understand its flows.  The fact is, not every developer is going to be familiar with it, and if we don’t want to spend every waking hour supporting any app that uses it, we need to know what errors will manifest so that we can walk developers through the issues they face.  It won’t be easy, but now that Okta offers it, we can handle all the different mobile and cloud use cases we face going forward.

I anticipate that I will be sharing much of what I learn here.  Here we go!

I’m an Architect, I’m not an Architect

I’m a Security Architect.  Before that, I was an Enterprise Architect, and before that a Solutions/Application Architect.  But I was none of these things.

Before that, I was a Systems Engineer, but I wasn’t that either.

I always knew I wasn’t exactly those things; it was an attempt to put labels on a new industry by matching it up to roles in centuries-old practices.  In Building Microservices, Sam Newman makes the same case.  He argues, successfully in my mind, that we are more City Planners than Architects or Engineers.  We should be creating zones instead of prescribing what color to paint the breakroom in each building.  We can make sure they don’t put a sewage plant next to the outlet mall, but we needn’t concern ourselves with the precise mix of cement in the Nike store.

Where we DO need to get involved is in the interactions between the zones.  How does the retail zone communicate with the banking zone?  How does the Credit Check service communicate with the Address Validation service?  We can say that you should use REST/HTTP between zones, but needn’t mandate that they use Java, .NET, or Node.  That’s not to say we will accept 10 different languages, as we at least need to support it all when the consultants go away, but there can be some flexibility.

Now, time to go play some SimCity and bill at the same time.

You Can Always Do More

For our current large implementation, we intentionally pushed the go-live back by three weeks, not because we were running late, but to put more time into verifying and testing, because this rollout affects most of the employees in the company.

We wrote corporate communications, wrote scripts to do automated tests, worked with other teams, communicated again, and had daily standups.  We were feeling pretty good about our chances.

After all that, the day before our implementation, IT leadership pulled me into a meeting and asked me a succession of questions about the implementation that we hadn’t considered!

I’m taking a couple of lessons from this.

First, treat this as a good thing.  Document the feedback so it can be used in the future; some of it amounted to good suggestions.

Second, find a way to keep upper management over-informed.  Remind myself that a message isn’t received until it’s been heard seven times.

And lastly, perfect is the enemy of the good.  We did all that we could think of, and you can always find more to test, but you have to use your experience and skills to attack the most likely failure vectors.

Here we go…

Finding Whitespace

I listen to a lot of podcasts, many of them about self-improvement.  It’s one thing to listen, another to absorb the advice and ideas, and another still to practice what they preach.

Many talk about creating margin to get creative.  However, if I’m always listening to the next podcast, whether I’m driving, running, or working in the yard, I never get that whitespace in my life.

I finally figured this out in the dentist’s chair yesterday, waiting for the numbing gel to take effect before having a cavity filled.  I worked out at least three things for the project I’m on that goes live Saturday.  I had no paper and no pen, so when I was done, I grabbed a contact card, borrowed a pen from the receptionist, and scratched out my notes.

If I hadn’t figured out these implementation steps prior to go-live, we would have been in danger of rolling back.

How do I keep this going?  A few times a week, I’ll turn off the podcast on a run or a commute and just be, maybe listening to music, maybe just silence.

Ethical Crossroads

I had a situation where I was accidentally copied on an internal email from a vendor we are negotiating with.

At first blush, I figured this could be valuable information and I should share it with my boss.  But then I started thinking about what this means for our long-term relationship with said vendor.  Our negotiations with this vendor have been win-win to this point and a deal that tilts dramatically in our favor could change things.  I decided to consume the information for myself, but not spread the email.  I can’t say I’d make this decision every time, but I did feel better after choosing that path.

Postscript:  After reading the email in more detail later in the day, it turned out that there really wasn’t that much content to swing the deal either way in any case.  So maybe that’s validation for my decision.

Becoming Paranoid Enough

My biggest struggle as I moved out of the Application Development / Enterprise Architecture space was becoming as paranoid as my new colleagues.  I understood how authentication and authorization worked, the theory behind multi-factor, and the need to encrypt.  I also knew enough about the code libraries for Java and Microsoft to know I could make our apps just as secure with those techniques as I could by managing a giant, monolithic, on-premise Web Access Management system.

However, for various reasons, I let it go, and now, three years later, we are finally being forced to change to the code-library technique due to business requirements.

That’s fine with me, and I welcome this second chance to do it right.  This is my Waterloo, and I intend to ensure we stay the course.  It’s never too late to do the right thing.

Using Puppet to Automate Infrastructure

DevOps – Automating Infrastructure

I’ve been putting a lot of time and thought into the relationship between Development and Operations, particularly the void between what Dev thinks is their responsibility and what Operations thinks is theirs. In the middle sits the actual deployment, configuration management, and so on.

This led me to Puppet. I first heard about it at JavaOne 2011 (along with Chef, Fabric, and CFEngine). We have a lot of Wintel at my employer, and I assume MSFT has a similar tool, but I have not been able to verify that System Center Configuration Manager meets the needs I’ve outlined.  In any case, it would likely have an extreme Wintel bias.

Puppet works in a client-server model. The server (the puppet master) runs as a daemon on its host and contains the configuration information that should be on each client.  Agents reside on the clients and connect back to the master to see if there are any configuration changes to apply.
I’ve gotten server-only tests to run to completion, creating files and stopping and starting services; a sketch of that kind of manifest is below.  I now have a VM that can act as a server so that I can test more sophisticated use cases.  I’ve found the documentation on the Puppet site to be excellent and have been supplementing it with the book “Pro Puppet”.
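
For illustration, here’s a minimal sketch of the kind of manifest those tests exercised, ensuring a file exists and keeping a service running; the resource names (/etc/motd, ntp) are placeholders of my own:

    # Ensure a file exists with known content.
    file { '/etc/motd':
      ensure  => file,
      content => "Managed by Puppet\n",
    }

    # Ensure the ntp package is installed and its service runs at boot.
    package { 'ntp':
      ensure => installed,
    }

    service { 'ntp':
      ensure  => running,
      enable  => true,
      require => Package['ntp'],
    }
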
I see many uses for this where I work.  Looking at the capabilities, I believe we can get our Configuration Management to the point where having to log in to a server manually to install something or verify its state would count as a failure.  This is how many companies run environments where an individual server admin manages 1000+ servers.