Since I joined Realmac I’ve helped manage our office network, which has grown from a single Time Capsule and modem, to a small server cabinet with a couple of switches, up to our current rack with six switches, two routers and an optical terminator.
When we moved into our new office earlier in the year I set up the network and the services running internally; this is a run-through of some of our equipment and the software running on it.
Previously we had just used business ADSL connections, but decided to go for something a little bigger at the new premises. Originally we opted for Ethernet in the First Mile, and while the latency on the line was good we didn’t have much bandwidth headroom. Now we’re running on a symmetric 30Mbps optical circuit, which is plenty fast and upgradable to 100Mbps.
We had been running the internal network on 10/100 Ethernet at our previous office, but I wanted to upgrade to Gigabit Ethernet when we moved, primarily to speed up file transfers to and from the file server. So, with the exception of the ground floor and some of the basement (we aren’t working in either at the moment), the new office is all Gigabit. Thankfully the new office building had already been wired for Ethernet, so we didn’t have to run any cabling this time.
When we moved I bolted the switches into the rack without much thought for where they should go, other than roughly in the middle. After we bought the new switches I started wiring them up to the patch panels. I wanted to avoid running cables over the front of any of the devices so that any one switch could be replaced easily in the future. I rearranged the rack as pictured so that each switch has a cable management spacer between it and the switch below. I also colour-coded the patch cables based on the floor their ports are on, which should help with identifying them in the future.
So that we could configure devices to use our globally routable IP addresses (we have an IPv4 /29 and an IPv6 /56) wherever they were needed in the building, I looked into configuring Ethernet VLANs on the switches. I started by putting the switch management interfaces into a separate VLAN, and then changed our Xserve, which is our IPv4 router and NAT, from connecting directly to our upstream ISP router to connecting to it via an intermediate switch that I’ve labelled the core switch. The core switch is also the designated STP root bridge for all of the VLAN networks. We now run three Ethernet VLANs over the Cat5: Management, Optical and Access, tagged 100, 200 and 300 respectively. These VLANs are configured on all of the switches pictured above so that any physical port can be connected to any of the virtual networks as needed.
Most of the Ethernet ports around the building are in access mode, untagged in the Access VLAN, so that they’re transparently a member of that VLAN without requiring any additional configuration on the connecting devices. A couple of the ports are configured as trunk ports and, in addition to being untagged members of the Access VLAN, are tagged in the Management or Optical VLANs. Ideally I’d be able to use MAC-based VLAN membership so that wherever I plug my laptop in, my Management VLAN port membership would follow me, but I haven’t had time to look into that yet.
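Our switches have their own configuration interfaces, but to illustrate the two port types, in Cisco IOS-style syntax they would look roughly like this (the interface names are placeholders; the VLAN IDs are ours from above):

```
! Access port: untagged member of the Access VLAN (300) only
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 300
!
! Trunk port: untagged (native) in Access, tagged in Management (100)
interface GigabitEthernet0/2
 switchport mode trunk
 switchport trunk native vlan 300
 switchport trunk allowed vlan 100,300
```

A device plugged into the first port needs no VLAN configuration of its own; a device on the second port must tag its Management traffic with VLAN ID 100 itself.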
Our Access VLAN currently only hosts our 10.0.0.0/8 IPv4 subnet. In the future I’d like to configure the Xserve as an IPv6 router too and have it advertise a /64 from our IPv6 subnet, so that devices can connect to IPv6 hosts natively without requiring a per-device tunnel configuration. Though I might initially restrict the IPv6 subnet to a further VLAN, making it opt-in rather than foisting a globally routable address on all the equipment connected to the Access VLAN that may be set to configure IPv6 automatically.
The next component in the network is our Xserve, which hosts a number of services.
It’s our IPv4 router and NAT for the Access network, using Packet Filter configured manually rather than the OS X Server NAT service so that we can use a custom IP address range. It provides DHCP for the Management and Access networks, and a DNS nameserver with records for the internal network under a .private top-level domain. It also terminates a VPN so that we can connect to the internal services from anywhere outside the office.
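The essential part of the manual Packet Filter setup is a single NAT rule; a minimal pf.conf sketch, where the WAN-facing interface name (en0 here) and the pass rule are assumptions rather than our exact configuration:

```
# pf.conf sketch: NAT the Access network out of the WAN-facing interface.
# en0 = interface towards the upstream ISP router (assumption).
# (en0) in parentheses tracks the interface's address if it changes.
nat on en0 from 10.0.0.0/8 to any -> (en0)
pass from 10.0.0.0/8 to any keep state
```

Doing this by hand rather than through the Server app is what lets the translated range be something other than the defaults the GUI offers.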
It’s our Open Directory master, providing authentication for the other services on the Xserve and for the other servers on the network. The other servers also have the Open Directory master in their directory authentication search path, so anyone can SSH into any of the servers on the network using the same network credentials they use for everything else. It also hosts network home folders, so that as you move around the network your files follow you. I find this particularly helpful when debugging issues on older operating systems: when I sign in on any of our various test devices, my source control working copies are already up to date.
It’s also a file server accessible over AFP, though this hasn’t been without issue: it frequently stops respecting the ACL entries we rely on to allow files to be modified by anyone. We use the AFP server as a Time Machine backup destination for the other servers on the network, and that, touch wood, seems to have worked well so far. The two non-boot drives in the Xserve are configured as a software RAID 1 mirror for redundancy, and the directories for the file server share points are stored on these drives.
We also use the default OS X Server Apache install to host a small directory page that links to all the other internal network services; it’s a tiny static site deployed automatically from a Git repository post-receive hook script.
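The hook is the standard bare-repository deployment trick: after a push, check the latest commit out into the web server’s document root. A sketch, where the deploy path is an example rather than our real one:

```
#!/bin/sh
# post-receive (sketch): runs inside the bare repository after a push.
# Checks HEAD out into Apache's document root, overwriting what's there.
# The path below is an example, not our real document root.
GIT_WORK_TREE=/Library/WebServer/Documents/intranet git checkout -f
```

Because the work tree is just a directory Apache serves, pushing to the repository is all it takes to publish a change.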
Finally, we use the built-in Profile Manager, which automates setting up new devices with per-user settings. It contains the settings needed for our email servers, VPN and wireless network, and saves us having to set these up or pass them around by hand.
That covers the Xserve. We also run Jenkins on a Mac Pro to automatically build and run tests after commits to our source control repositories. Jenkins authentication is deferred to the Open Directory server using LDAP over TLS, and Jenkins itself is only accessible behind an Apache proxy that terminates HTTPS. Behind the Jenkins server we run multiple Mac minis, each with a different version of OS X, so that we can run the same tests across multiple operating systems simultaneously. Driving the tests is a modified version of xcoder, which I forked and made a couple of changes to for building and testing OS X projects.
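The proxy in front of Jenkins is a stock mod_ssl/mod_proxy virtual host; a sketch of the idea, with the hostname, certificate paths and Jenkins’s port all assumed:

```
<VirtualHost *:443>
    ServerName jenkins.example.private

    SSLEngine on
    SSLCertificateFile    /etc/certificates/jenkins.crt
    SSLCertificateKeyFile /etc/certificates/jenkins.key

    # Jenkins itself listens only on the loopback interface,
    # so the proxy is the only way in.
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

Keeping Jenkins bound to loopback means TLS and access control live in one place, in the Apache configuration.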
On a second Mac Pro we run a slightly modified version of gollum, which supports reading committer details from the session, to host a wiki. We use thin to serve a small Rack-based application, again over TLS, which checks authentication and then hands requests on to gollum. Authentication is, again, handled by LDAP over TLS to the Open Directory server and is used to populate the Rack request session with committer details.
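The authentication layer is just Rack middleware sitting in front of gollum. A minimal sketch of the idea, where the class name and session key are made up, and the authenticator is a stand-in callable rather than our real LDAP-over-TLS bind:

```ruby
# Rack middleware sketch: reject unauthenticated requests, otherwise
# stash committer details in the session and pass the request through
# to the application behind it (gollum, in our setup).
class CommitterAuth
  def initialize(app, authenticator)
    @app = app
    # authenticator is any callable taking the Rack env and returning
    # user details or nil; the real one binds to Open Directory via LDAP.
    @authenticator = authenticator
  end

  def call(env)
    user = @authenticator.call(env)
    return [401, { 'Content-Type' => 'text/plain' }, ['Unauthorized']] unless user

    # gollum reads committer details from the session to attribute edits.
    session = (env['rack.session'] ||= {})
    session['gollum.author'] = { name: user[:name], email: user[:email] }
    @app.call(env)
  end
end
```

In a config.ru this would wrap gollum’s Rack application with a `use CommitterAuth, …` line before the `run`, so every edit arrives already attributed.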
The most recent service we’ve set up is our status board and webhooks server.
The iMac on the lower right in the above image takes advantage of our VLAN configuration and is connected to both the Access and Optical VLANs so that we can give it a globally routable address; we can then configure webhook integrations with this address to receive data to display on the status boards. This iMac runs two applications, both using thin as a server. The first is bound to the globally routable address on the Optical VLAN and writes data received from webhook requests to a database, which the second application reads from. The second is bound to the address on the Access VLAN interface and serves the pages displayed on the iMacs.
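The split can be sketched as two tiny Rack-style apps sharing a store. Everything here is illustrative: the endpoint layout and store path are invented, and the standard library’s PStore stands in for the real database:

```ruby
require 'pstore'
require 'json'

# Shared on-disk store both applications read and write.
# The path is an example; the real apps use a proper database.
STORE = PStore.new('/tmp/statusboard.pstore')

# Public-facing app: bound to the Optical VLAN address. Records whatever
# a webhook POSTs under the service name taken from the request path.
PUBLIC_APP = lambda do |env|
  service = env['PATH_INFO'].delete_prefix('/')
  payload = JSON.parse(env['rack.input'].read)
  STORE.transaction { |db| db[service] = payload }
  [204, {}, []]
end

# Internal app: bound to the Access VLAN address. Serves the stored
# data back to the status board pages.
INTERNAL_APP = lambda do |env|
  service = env['PATH_INFO'].delete_prefix('/')
  data = STORE.transaction(true) { |db| db[service] }
  if data
    [200, { 'Content-Type' => 'application/json' }, [JSON.generate(data)]]
  else
    [404, {}, []]
  end
end
```

Binding each app to a different interface is what keeps the writer reachable from the internet while the reader stays internal-only.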
Having the server applications running on one of the iMacs has proved problematic, as they’re liable to be accidentally turned off or disconnected from the network, so we’re planning to move the applications to a Mac mini in the basement at some point and just use the iMacs to display the data.
To keep all our server processes alive and responding to requests (including the thin process used to serve the wiki) we use OS X’s built-in process manager, launchd; this saves having to install another dependency and is easy to configure.
Beyond the webhook notifications, we also have two periodic rake tasks that fetch data for the status board. These are also jobs registered with launchd, one configured to run every hour and the other every hour between 12:00 and 15:00.
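The long-running servers just use launchd’s KeepAlive key, while the periodic tasks use StartCalendarInterval. A sketch of the plist for the afternoon job, where the label and paths are examples and I’ve read “every hour between 12:00 and 15:00” as the top of each hour from 12:00 to 15:00:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.statusboard.fetch</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/rake</string>
        <string>-f</string>
        <string>/Library/StatusBoard/Rakefile</string>
        <string>fetch</string>
    </array>
    <!-- Run at the top of each hour from 12:00 to 15:00 -->
    <key>StartCalendarInterval</key>
    <array>
        <dict><key>Hour</key><integer>12</integer><key>Minute</key><integer>0</integer></dict>
        <dict><key>Hour</key><integer>13</integer><key>Minute</key><integer>0</integer></dict>
        <dict><key>Hour</key><integer>14</integer><key>Minute</key><integer>0</integer></dict>
        <dict><key>Hour</key><integer>15</integer><key>Minute</key><integer>0</integer></dict>
    </array>
</dict>
</plist>
```

Unlike cron, launchd runs a missed StartCalendarInterval job when the machine wakes, which suits Macs that aren’t on around the clock.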
My current network project is a server for the camera we have set up in the kitchen, which periodically records images. The camera can be configured to send them to an FTP or SMTP server, so my plan is to write an FTP server to receive the images and an HTTP server to sit alongside it. Hopefully I’ll have that finished shortly and can share some of the details.