© Miles Chatterji
Creative Photographer & I.T. Infrastructure Engineer based in Los Angeles.
Hi, I'm Miles, an I.T. Infrastructure Engineer by day and Creative Photographer by night, based in Los Angeles. This site is intended to showcase my creative work as a portfolio; however, it can also be used to learn more about my professional I.T. career. Use the navigation on the left to check out my recent work in the portfolio tab, or click the updates tab to learn more about my work and personal life. A PDF version of my current resume is available on the right, should you be interested.
Phone: 317.721.7858
Study: B.S., I.T., Indiana State University
Resume: Click to View Resume (PDF)
I started my professional journey in 2009. After enrolling in a science program at ISU in 2005 and owning/operating a shaved ice stand for nearly 7 years, I landed a job with Steadfast Networks in Chicago, IL. I hadn't yet finished my degree and was chomping at the bit for this job, as well as being distracted by the dollar signs that came with it. After 6 months I was feeling overwhelmed. I had been thrown into the fires of the fast-paced world of web hosting, handling customer demands ranging from networking to database administration across all flavors of Linux and Windows operating systems.
It was a bit too much for me at the time, and expensive too. It turns out that $40,000, even in 2009, wasn't quite enough to make a good living in Chicago without several roommates, and even then, saving money was hard to do. So I decided to return home to southwestern Indiana and try again.
I briefly worked at a hospital upon returning and quickly realized that I didn't like it. The mentality of the IT department was old school, and they had trouble adopting newer practices. I decided to finish my bachelor's and look for employment elsewhere. Luckily, the university I was attending was hiring for junior-level positions, and I already had a rapport with them from my time as a skilled student worker on the infrastructure team a couple of years prior.
I was hired into an infrastructure generalist position and was able to use all of the skills I had acquired in the real world, only this time at a more comfortable and affordable pace. I managed several projects for them, including building a homegrown honeypot to thwart bots hammering RDP and SSH ports, implementing Linux patch management (Spacewalk with CentOS and Red Hat Satellite), and handling lower-level VDI deployments. Soon after graduating, I started to itch for advancement that wasn't available at the university at the time. I started casually browsing and interviewing, but nothing exciting materialized. One day, a former assistant director of my department at the university got a hold of me and asked me to come and meet with the leaders of a local ISP & consulting firm. We hit it off, and a new chapter began.
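To give a flavor of the honeypot idea, here's a minimal sketch of a listener that just logs every connection attempt on a decoy port. This is illustrative only, not the original code; the port number and log path are made up.

```python
import socket
import datetime

# Minimal decoy listener: accept connections on an SSH-like port and log the
# source of every probe. Port and log path are placeholders for the example.
LISTEN_PORT = 2222
LOG_FILE = "honeypot.log"

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(5)
    while True:
        conn, (addr, port) = srv.accept()
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} probe from {addr}:{port}\n")
        conn.close()  # drop the connection; the logged IPs can then feed a blocklist

if __name__ == "__main__":
    main()
```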
After meeting with the head of Joink Technology Solutions Group, I was offered the position of special projects associate. I was also toying with the idea of graduate school, and they were willing to let me be flexible enough to attend classes during the day if I needed to. Grad school didn't pan out, for reasons that are beyond the scope of my work history, but let's just say I was learning more in the real world, and the professors in my programs weren't really up to speed with the modern demands of technology. They were teaching MS Access in 500-level courses in 2012... yes, you read that correctly. At Joink, I was given the exposure and autonomy to create entire ecosystems from the ground up. This was an experience I was truly grateful for; the hours were long and tough, but the pay and experience were top-notch.
My greatest project was creating a VDI experience through VMware View for an entire school system. This included everything from bare-metal ESXi and vCenter installs and clustering to selecting products to deliver applications. It also included things I hadn't had much exposure to and wasn't as comfortable with: working with the school system to obtain complete employee data from an IBM AS/400 so we could build an AD forest from the ground up, populating that employee data so everyone would have logins for the VDI platform, and working with third-party vendors of physical door-access systems to synchronize their databases with AD, so the same ID card that unlocked doors could also be used for print-release jobs out of the VDI environment.

The whole thing was contained in a freshly built data center, on Cisco UCS C-Series servers loaded with disks and configured with vSAN for fast application performance and delivery. Unidesk was selected for application delivery (it has since been purchased by Citrix, and was the only good solution for this at the time). Application packages were built and deployed to a selected elementary school pilot, and NVIDIA GRID cards were used through the vGPU beta program (at the time), coupled with Teradici PCoIP offload cards, to ensure that each teacher's VM could run Google Earth with full graphics detail and minimal lag.

Networking between schools was sufficient with a new 1 Gb fiber link provided by Windstream, with Cisco gear at the helm of each location. DHCP for the new thin clients was handled at the switches at each school, while DHCP for the VM images was handled by the Active Directory servers, with profiles and login scripts pointing to DFS shares that loaded at sign-in so users could change their desktop images for a more personal experience. Overall the project was a success, and I was also exposed to my first DR scenarios for other customers, involving fiber networks with FCoE and last-mile vendors supplying complete DR backup sync across metropolitan areas of about 70 miles.
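The AD build-out largely came down to turning the AS/400 employee export into user objects. Below is a rough sketch of that kind of provisioning script, assuming a CSV export and the Python ldap3 library; the server name, credentials, OU path, and column names are placeholders, not the district's actual values.

```python
import csv
from ldap3 import Server, Connection, NTLM

# Hypothetical bulk provisioning: create AD user objects from an employee export.
# Hostname, service account, OU, and CSV columns are made up for the example.
server = Server("dc01.school.example", use_ssl=True)
conn = Connection(server, user="SCHOOL\\svc_provision", password="********",
                  authentication=NTLM, auto_bind=True)

with open("as400_employee_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        sam = row["username"]
        dn = f"CN={row['first']} {row['last']},OU=Staff,DC=school,DC=example"
        conn.add(dn, "user", {
            "sAMAccountName": sam,
            "givenName": row["first"],
            "sn": row["last"],
            "userPrincipalName": f"{sam}@school.example",
        })
        # A real run would also set an initial password and enable the account
        # (userAccountControl), which requires LDAPS and extra attributes.

conn.unbind()
```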
Over time, the stress and long hours got to me, and I decided to look for something slower. As luck would have it, there were higher-level positions open back at the university, so after 2.5 years of long weeks, long nights, and not much free time, I decided to make the move back.
Back home at the university, I was greeted with open arms, slightly higher pay, and more responsibility. While not as exciting as my previous work, it was something to be proud of: I managed the entirety of the virtual infrastructure and the Storage Area Network. I made some nice achievements:
I got to learn a little more on the admin side this time, with the autonomy to work with data center operations to consolidate hardware, move production onto newer G8 and G9 HPE BL460 blades, and repurpose the older G7 blades as clusters for experimental projects using licenses obtained through VMUG. Cutting support on the test/dev clusters from 24/7 to 8/5 (we could afford to lose several hosts and address failures during business hours, and we didn't have any test/dev SLA agreements at the time anyway) drastically cut costs, which ultimately helped us buy newer storage arrays through the reduction in licensing spend.

The old and tired HP EVAs (four, of varying models and capacities) needed to be replaced, and I determined that two EMC Unity 400 hybrid arrays would fit the budget, lower I/O latencies, and poise us to use newer technologies like VVols for object-level storage, which would help delegate some storage duties to other admins as well as speed up the aging Commvault snapshot-based backup system. The two EMC Unity 400s were placed in different data center locations, with volumes local to each data center and some, such as the Blackboard and Ellucian data, replicated synchronously between arrays. I learned how to manage zoning via the command line on Cisco MDS equipment as well as the Flex-10 HPE switches in each of our HPE c7000 blade chassis. Fibre Channel was new to me, but learning the magic of fabric login (FLOGI) and how traffic is managed with buffer-to-buffer credits has helped me in my current, and hopefully future, endeavors as well.
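Much of the zoning work was repetitive enough to script. Here's an illustrative helper that emits Cisco MDS-style single-initiator zoning commands from a list of WWPN pairs; the zone names, WWPNs, VSAN number, and zoneset name are invented for the example.

```python
# Generate NX-OS zoning config for a Cisco MDS fabric from (zone, initiator,
# target) tuples. All names and WWPNs below are placeholders.
VSAN = 10
ZONESET = "ZS_PROD_VSAN10"

zones = [
    ("esx01_hba0", "20:00:00:25:b5:aa:00:01", "50:06:01:60:47:e0:1b:2c"),
    ("esx02_hba0", "20:00:00:25:b5:aa:00:03", "50:06:01:60:47:e0:1b:2c"),
]

lines = ["configure terminal"]
for zone_name, init_wwpn, tgt_wwpn in zones:
    lines += [
        f"zone name {zone_name} vsan {VSAN}",
        f"  member pwwn {init_wwpn}",
        f"  member pwwn {tgt_wwpn}",
    ]
lines += [f"zoneset name {ZONESET} vsan {VSAN}"]
lines += [f"  member {zone_name}" for zone_name, _, _ in zones]
lines += [f"zoneset activate name {ZONESET} vsan {VSAN}", "end"]

# Paste the output into the switch session (or push it with a tool of choice).
print("\n".join(lines))
```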
Alas, all good things come to an end; this was a great experience, but local politics dictated my departure. The university was struggling with dwindling enrollment numbers, and the 1% annual raises weren't enough to keep up with the struggling local economy. The property taxes on the house I had purchased four years prior almost doubled, causing the escrow portion of my mortgage payment to nearly double as well, which the 1% raises came nowhere close to covering. I brought this to the attention of the fantastic director I reported to. He had been a big champion of my work over the years, and he was honest and straight with me when he said he couldn't do anything, would understand, and would even help me find higher-paying jobs if I needed to leave. I didn't have much of a choice at that point. I began to pack my things as I interviewed for new jobs, knowing that I would have to relocate.
I interviewed with my current employer, originally as a VMware admin. That's really a catch-all title for an infrastructure admin, as it touches all things hardware, AD, VMware, storage, and managing Linux installations and Red Hat Satellite once again. The first two years involved a lot of VMware work moving hosts, VMs, and services from a closing primary data center (CDI) to Databank, mixed with several automation pilots for VM and other service deployments to be handled through an internal web portal. That effort ultimately died as we started moving everything to AWS, then picked up again with some Terraform and Ansible deployments to AWS. While I expect to be a part of that in the future, my current role is lead storage engineer, which fell into my lap after the former (and only) storage engineer quit, leaving me as the only one with any fibre/zoning/array experience to take over. I currently manage two EMC VMAX 250Fs, two Pure Storage M20s, and two Isilon H400s, as well as the Brocade switches between them and the dark fiber connectivity between data centers provided by Zayo. Backups through Avamar Gen4s and DD Boost on a couple of Data Domain 9300s come with the job as well. Managing these has been quite rewarding; internally, most of the company has no idea that Isilon and array failovers have even occurred, which of course is exactly how it should be. Most of the work as of late is helping with the move to AWS by deleting old unused LUNs and expanding current ones so admins can clone staging and production environments, in preparation for cloning them into the AWS realm. I'm currently working through training for the AWS Solutions Architect Associate certification, so that I can continue to maintain and monitor storage and suggest third-party storage solutions that preserve current SLAs and continuity after the move is over. I hope to then pick back up with Terraform and Ansible, using APIs to make an optimal home in AWS as the needs of the company change.
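A fair amount of the day-to-day LUN work on the Pure arrays is scriptable too. The sketch below assumes the legacy purestorage Python REST client (method names can differ in newer SDK versions), and the array address, API token, and volume name are placeholders: it reports provisioned sizes so stale LUNs can be reviewed before deletion, then grows one volume for a staging clone.

```python
import purestorage  # legacy Pure Storage 1.x REST client; newer arrays use py-pure-client

# Placeholder array address and API token for the example.
array = purestorage.FlashArray("pure-m20.example.net", api_token="REPLACE_ME")

# Report each volume's name and provisioned size so unused LUNs can be
# reviewed (and eventually destroyed) as part of the AWS migration cleanup.
for vol in array.list_volumes():
    print(f"{vol['name']}: {vol['size'] / 1024**3:.0f} GiB provisioned")

# Grow a volume that a staging clone has outgrown; Pure volumes can only be
# extended, never shrunk, through this call. Size suffix follows the REST API.
array.extend_volume("staging_db_lun01", "4T")
```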