Project Description
Building and configuring any substantial network is always a challenge, no matter which brand you choose: planning, designing and bringing it live is a delicate process.
When network access is at the core of a company's business, changes or improvements are rarely welcomed, mainly because of the unknowns and the possibility of downtime.
In these scenarios I usually go by the saying "if it's not broken, don't try to fix it".
However, you eventually reach a point where you realise the security and stability problems can no longer be ignored. This very ambitious project was no exception to the rule.
In this case I was faced with an existing internal network linked by 2 unmanaged switches that connected every computer on the same network segment.
Security issues aside, the logical design was a pure disaster: IP addressing was configured manually instead of via DHCP, and gateway redundancy was achieved by changing the network card configuration on each machine.
Anyway, the project consisted of a few stages:
- Improving the cabinet
- Centralizing equipment management
- Improving security
- Adding multiple gateways
Cabinet Improvement
The company already had a 40U cabinet divided by 3 shelves.
The top shelf held 2 servers, the middle one a monitor and keyboard, and the bottom one another 2 servers plus a UPS.
All very tidy, except that we could not add any more servers to the setup: the existing 4 servers were housed in regular ATX tower cases, which literally ate up all the available space.
The 2U Cases
So to start with, I began looking for a server case that would fit our needs.
Because these servers use ATX motherboards, this was a bit of a mission, but soon enough I found myself looking at overseas (American) websites that easily provided the ideal solution; however, the prices were not attractive, and customs expenses would come on top of that.
Reluctant to order from the US, I started digging deeper on the internet and eventually found a company in the UK, http://www.servercase.co.uk/, that supplied a 2U rack case model (in the pictures) supporting ATX power supplies and regular ATX motherboards. It all seemed too good to be true, but fortunately, in this case, it was a clear hit!
These cases offered all of the above and more: support for three 3.5" hard disks and 4 internal fans with high air throughput were included.
Also, with a bit of help from the ServerCase sales rep, I ordered some rails for the cases (in the pictures); definitely worth the extra money, as server access becomes much easier.
And all of this at a fraction of the cost of ordering from overseas.
The 2U Case Caveats
The units soon arrived and matched expectations.
One caveat of using this type of case is that most power supply units come with an ATX cable that is not long enough, mainly because the motherboard sits at the back of the case and the power supply at the front, so the cable has to run all the way from the front to the back.
I easily solved this problem by ordering ATX cable extenders from my local supplier.
Another problem is that most regular PC power supplies nowadays have the fan on top rather than at the back, as they did a couple of years ago. This makes them incompatible with the 2U format, simply because there is not enough room above the power supply for it to "breathe", so I had to order power supplies with rear ventilation to get around this problem.
Moving Structure
The next thing I took into account was that once the cabinet was full, it would be a big problem to move it around if necessary, or even to access its back, due to the confined space it sits in.
I spent some time looking for some sort of professional solution to this issue, but it was a fruitless effort...
Because of that, I decided to take the matter into my own hands and have a structure built that could accommodate the rack and its weight. I quickly drew the shape of the rack base and took some measurements, and it became clear that the structure would have to be rock solid to sustain all the equipment's weight.
For €30 I bought 4 wheels (in the pictures), and for another €45 a local locksmith built the steel structure, painted it with a special rust-proof coating and attached the wheels.
The end result was a rock-solid structure that allows me to move the cabinet easily, even when it is full.
Centralize Equipment Management
One of the problems with the network was that the 2 switches connecting both companies were inside wall cabinets, which, if you ask me, is only where they belong when you are not on the same floor as the main rack.
In this scenario, however, that presented 2 issues: first, terrible management access; and second, if power to a switch went down, the computers on the network lost connectivity.
So the solution was to migrate the switches to the main rack and put patch panels in their place, but unfortunately this meant rewiring the whole network back to the same room... and that was quite something!
After many hours spent running 1300 meters of cable through the office and wiring RJ45s, we finally managed to bring all of the ports into the main enclosure. The switches are now located in the main rack and protected behind a UPS.
Improving Security
One of the aspects that had been overlooked was indeed network security.
As mentioned earlier, all the machines, whether they belonged to the accounts, production or sales department, used the same network segment.
This caused several security issues, mainly because everybody could access the same resources and there were no restrictions on which IPs could communicate with the server machines, leaving those computers vulnerable to inside and outside threats.
The solution here was clearly to use VLANs, isolating traffic by organisational unit as seen below:
(PICTURE)
With this network scheme in place we can now restrict access to sensitive information much more efficiently, defining rules based on the source and destination of packets and controlling access.
One of the main goals was also to provide wireless access to customers without exposing the internal network. This is now possible thanks to the 802.1Q protocol and a RADIUS server which, depending on the login, places the machine in the appropriate VLAN.
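To give an idea of what this looks like in practice, here is a minimal sketch of how the department VLANs and a basic server-protection ACL could be pushed to a Cisco switch using Python and the netmiko library. The VLAN IDs, names, subnets, switch address and credentials are placeholders of my own, not the real configuration.

```python
# Sketch only: create example department VLANs and a basic ACL protecting
# the server VLAN. All IDs, names, subnets, addresses and credentials are
# placeholders, not the real configuration.
from netmiko import ConnectHandler

VLANS = {
    10: "ACCOUNTS",
    20: "PRODUCTION",
    30: "SALES",
    40: "SERVERS",
    50: "GUEST-WIFI",
}

config = []
for vlan_id, name in VLANS.items():
    config += [f"vlan {vlan_id}", f" name {name}"]

# Example restriction: only the accounts subnet may reach the server subnet.
config += [
    "ip access-list extended PROTECT-SERVERS",
    " permit ip 192.168.10.0 0.0.0.255 192.168.40.0 0.0.0.255",
    " deny ip any 192.168.40.0 0.0.0.255",
    " permit ip any any",
    # Applied on the server VLAN interface, assuming inter-VLAN routing
    # happens on this switch.
    "interface Vlan40",
    " ip access-group PROTECT-SERVERS out",
]

switch = {
    "device_type": "cisco_ios",
    "host": "192.168.1.2",   # placeholder management address
    "username": "admin",
    "password": "changeme",
}

conn = ConnectHandler(**switch)
conn.send_config_set(config)
conn.save_config()
conn.disconnect()
```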
Multiple Gateways
Internet access is a must for both of these companies, as their business relies on it, but unfortunately every now and then the link becomes unavailable.
To address this issue the company has a secondary link from a different provider.
Before the changeover, making the network computers use this secondary link when the internet failed was a big problem: we had to go to each computer individually and change its gateway to point at the other link. This was a very inefficient system, not to mention the workload and the need for a technician to be available to do it.
Now it can be accomplished with a simple route rule on the Cisco router, which can easily be applied manually or by a script.
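As an illustration, the scripted version can be as small as the sketch below, which uses Python with the netmiko library to swap the default route over to the backup provider. The router address, gateway addresses and credentials are placeholders, not the values actually in use.

```python
# Sketch only: swap the default route from the primary to the backup provider.
# Gateway addresses, router address and credentials are placeholders.
from netmiko import ConnectHandler

PRIMARY_GW = "203.0.113.1"    # placeholder primary ISP gateway
BACKUP_GW = "198.51.100.1"    # placeholder backup ISP gateway

router = {
    "device_type": "cisco_ios",
    "host": "192.168.1.1",
    "username": "admin",
    "password": "changeme",
}

conn = ConnectHandler(**router)
conn.send_config_set([
    f"no ip route 0.0.0.0 0.0.0.0 {PRIMARY_GW}",
    f"ip route 0.0.0.0 0.0.0.0 {BACKUP_GW}",
])
conn.save_config()
conn.disconnect()
```

Reverting to the primary link is the same two commands with the gateways swapped.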
The Link Failover Automation
Initially this was supposed to be automated, but theory does not always work out in practice, and this was one of those cases!
When we programmed the routers with "tracks" that monitored internet connectivity by pinging Google's servers every 30 seconds, we found that they were not reliable enough, throwing false positives that triggered network convergence all the time, which caused instability and connectivity problems.
Others suggested pinging the ISP gateway instead. That seemed to work well for one of the providers, which allowed pings to its gateway, while the other gave no response to such requests, making this approach unreliable too.
Don't forget that the main issue with a system like this is detecting whether internet access is really there. You cannot monitor ports for shutdowns or ISP routers for presence, because if the problem is somewhere further along the line your ISP router will keep working as if nothing had happened, so tracking internet reachability in a situation like this is problematic when you cannot afford much downtime for network testing.
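For reference, the kind of track-based configuration we experimented with looks roughly like the sketch below, again pushed with Python and netmiko; the probe target, interface name, gateway addresses and credentials here are placeholders rather than the exact values we used.

```python
# Sketch only: IP SLA probe + track object + tracked default route.
# Probe target, interface, gateways, router address and credentials
# are placeholders.
from netmiko import ConnectHandler

router = {
    "device_type": "cisco_ios",
    "host": "192.168.1.1",
    "username": "admin",
    "password": "changeme",
}

failover_config = [
    # Ping an external address every 30 seconds.
    "ip sla 1",
    " icmp-echo 8.8.8.8 source-interface GigabitEthernet0/0",
    " frequency 30",
    "ip sla schedule 1 life forever start-time now",
    # The track object follows the probe's reachability state.
    "track 1 ip sla 1 reachability",
    # The primary default route is withdrawn when the track goes down,
    # letting the higher-distance backup route take over.
    "ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 1",
    "ip route 0.0.0.0 0.0.0.0 198.51.100.1 250",
]

conn = ConnectHandler(**router)
conn.send_config_set(failover_config)
conn.save_config()
conn.disconnect()
```

With this in place the primary route disappears whenever the probe fails, which is exactly why the false positives described above made it unworkable for us.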
We have decided to leave this on manual, meaning that if an outage does indeed happen, the routing rule has to be applied by someone.
However, a little GUI application was built to hide all the Cisco "gibberish" and make the gateway changeover easy.
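The tool itself is not included here, but a minimal sketch of the idea, using Python with tkinter for the GUI and netmiko to talk to the router, could look like this; the gateways, router address and credentials are again placeholders, and the real application is not necessarily built this way.

```python
# Sketch only: a two-button GUI that swaps the router's default gateway.
# Addresses and credentials are placeholders; the real tool may differ.
import tkinter as tk
from tkinter import messagebox
from netmiko import ConnectHandler

GATEWAYS = {"Primary ISP": "203.0.113.1", "Backup ISP": "198.51.100.1"}

ROUTER = {
    "device_type": "cisco_ios",
    "host": "192.168.1.1",
    "username": "admin",
    "password": "changeme",
}

def use_gateway(name: str) -> None:
    """Remove the known default routes and install one via the chosen gateway."""
    commands = [f"no ip route 0.0.0.0 0.0.0.0 {gw}" for gw in GATEWAYS.values()]
    commands.append(f"ip route 0.0.0.0 0.0.0.0 {GATEWAYS[name]}")
    conn = ConnectHandler(**ROUTER)
    conn.send_config_set(commands)
    conn.save_config()
    conn.disconnect()
    messagebox.showinfo("Gateway changeover", f"Now routing via {name}.")

root = tk.Tk()
root.title("Internet gateway changeover")
for name in GATEWAYS:
    tk.Button(root, text=f"Use {name}", width=30,
              command=lambda n=name: use_gateway(n)).pack(padx=10, pady=5)
root.mainloop()
```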
Final Notes
Overall it was quite an experience, going from a lab environment to the actual scenario and making it happen.
Yes, it took a while to plan and program every step of it, but it is definitely worth all the headaches if a stable and reliable system is needed.
Project Information
Categories: Networking
Project Timeline
Start: Nov, 2014
End: Jan, 2014