Seeking a challenging technical leadership opportunity managing large-scale data systems. Diagram, TDD, and CI/CD addict. Love untangling epic git disasters. Python / Golang / Perl / DBs / APIs / cloud. 100% telecommute.
Leadership: For seven years in the early 2000s I managed 8-12 programmers and database/system administrators in a mission-critical OLTP database environment with multiple application stacks ($400M revenue stream). Ran daily operations while simultaneously managing high-level expectations and deliverables for senior management and ownership.
Technical: 30 years of programming / lead developer experience. Strong data modeling skills. Sustained focus on the construction and maintenance of TCP/IP services (over "trusted" and untrusted networks). Large-scale processing of JSON / XML standards data, and REST APIs exposing that data. Developed multi-site load balancing, high availability, and disaster recovery solutions, including procedure definition, enforcement, and trials. Strong focus on open-source technologies. I have championed documentation efforts at many companies.
Telecommuting consultant. Python, Golang, Perl, PL/pgSQL, large-scale databases, REST APIs, minor web work. Industries have included bioinformatics, AI (LLM) start-ups, e-commerce (retail), advertising technology, and large event / venue equipment rental (and data mining thereof).
Recent clients have had me doing a lot of DevOps work to fill staffing gaps. Strategies have swung back and forth between monoliths and microservices. Stacks variously driven by Docker/Compose, Terraform, Chef/Ansible, Helm charts, and other tools.
Worked as a staff-augmentation consultant for the REST API team and associated services. The company employed ~140 engineers. Our primary mission was to design, maintain, and enhance a PostgreSQL schema and the REST APIs for modifying data in that schema, primarily for configuration of advertising campaigns.
Primary PostgreSQL database: 273 tables, 75 views, 234 functions (mostly PL/pgSQL); ~500 GB, ~80M rows.
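A minimal sketch of the API-to-schema access pattern, assuming Go's database/sql with the github.com/lib/pq driver; the connection string, the get_campaign_budget function, and the campaign id are hypothetical placeholders, not the real schema:

```go
// Sketch only: one common pattern with a function-heavy PL/pgSQL schema is
// to call database functions rather than issue ad-hoc SQL against tables.
// All names below (DSN, function, campaign id) are illustrative.
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres",
		"postgres://api:secret@localhost:5432/campaigns?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Bound the query so a slow database cannot hang the caller.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	var budget float64
	err = db.QueryRowContext(ctx,
		"SELECT get_campaign_budget($1)", 42).Scan(&budget)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("campaign 42 budget: %.2f\n", budget)
}
```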
API v3 stack: Golang 1.20.5, gin 1.9.1 web framework. 67K lines of tests, 116K total lines of code. Stack built locally (development) via Docker Compose (DB and application). IDE: Visual Studio Code plus custom linting rules (revive). Continuous integration via CircleCI. If all tests pass, topic branches are deployed automatically to an in-house Kubernetes cluster for both QA and PROD. Fully automated continuous delivery via Helm charts. Prometheus integration for statistics. Production alerting via PagerDuty. Log aggregation via SumoLogic, including SumoLogic Traces (in-app OpenTelemetry hooks).
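To give a concrete flavor of that stack, here is an illustrative-only gin handler with a Prometheus scrape endpoint; the route, handler body, and metric name are hypothetical and not MediaMath code:

```go
// Sketch of a gin route plus Prometheus metrics, in the spirit of the
// v3 stack described above. Route, response, and metric names are made up.
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counter registered with the default Prometheus registry.
var campaignReads = promauto.NewCounter(prometheus.CounterOpts{
	Name: "campaign_reads_total",
	Help: "Number of campaign read requests served.",
})

func main() {
	r := gin.Default()

	// Application route: would normally call into the PostgreSQL layer.
	r.GET("/v3/campaigns/:id", func(c *gin.Context) {
		campaignReads.Inc()
		c.JSON(http.StatusOK, gin.H{"id": c.Param("id"), "status": "active"})
	})

	// Prometheus scrape endpoint.
	r.GET("/metrics", gin.WrapH(promhttp.Handler()))

	r.Run(":8080")
}
```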
API v2 stack: Perl 5.30.1, Catalyst 5.90128 web framework. 98K lines of tests, 232K total lines of code. Stack built locally (development) via Docker Compose (DB plus main API and ancillary applications). IDE: developer's choice; Visual Studio Code and vim were both common. Continuous integration via Jenkins, which also served as our "continuous delivery" mechanism to pre-allocated in-house clusters of QA servers (18 sets). Production deployment automated and integrated into Slack, sending status updates for ~15 different phases of production rollout. All hardware pre-allocated and dedicated (not dynamic). statsd integration for statistics. Production alerting via PagerDuty. Log aggregation via SumoLogic. No OpenTelemetry.
core-serializer: Perl 5.34.0, Python 3 (Dockerfile "latest"). 12K lines of tests, 17K total lines of code. Stack built locally (DEV and QA) via Docker Compose (DB and application). Continuous integration and delivery via CircleCI. If all tests pass, production is deployed automatically via Chef recipe updates and published via an in-house Chef Manage server (chef-client on dedicated hardware).
In 2020 Golang was chosen to replace the Perl-based API v2, and API v3 expansion began in earnest. We re-implemented core functions, driven by next-generation UI layer requirements. I coordinated with product owners and stakeholders as needed to design and deliver those solutions. I wrote reams of documentation for those systems, with countless diagrams. I emphasized solving problems once, since our historical one-off, ad-hoc solutions had remained inconsistent and extremely expensive (in time and resources) across years and multiple generations of product, leadership, and engineering staff.
The APIs also serve as orchestration layers, back-ending requests to other services (APIs and otherwise). We spent a great deal of time and effort developing and maintaining integrations with those systems, hosted by other MediaMath departments and by many third-party vendors.
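A sketch of that orchestration pattern, again with hypothetical route, downstream URL, and payload shapes: the handler proxies to a downstream service under an explicit timeout and folds the result into its own response.

```go
// Sketch of an orchestration-style handler: accept a request, call a
// downstream service, merge the result. URL and fields are illustrative.
package main

import (
	"context"
	"encoding/json"
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
)

func fetchDownstream(ctx context.Context, url string) (map[string]any, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	r := gin.Default()
	r.GET("/v3/campaigns/:id/report", func(c *gin.Context) {
		// Bound the downstream call so a slow dependency cannot hang the API.
		ctx, cancel := context.WithTimeout(c.Request.Context(), 2*time.Second)
		defer cancel()

		report, err := fetchDownstream(ctx,
			"http://reporting.internal/v1/report/"+c.Param("id"))
		if err != nil {
			c.JSON(http.StatusBadGateway, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, gin.H{"id": c.Param("id"), "report": report})
	})
	r.Run(":8080")
}
```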
For three years the company went through an AWS microservices phase. We split large chunks of the above DB and APIs out into AWS EC2, RDS, ECS, and a dozen other AWS services. Eventually AWS proved too expensive ($3M/month), and several of those microservices were re-absorbed into the original services. I designed, developed, and tore down services on both sides of those transitions.
I served as an informal part-time trainer / mentor for several new hires. I upgraded thousands of lines of abandoned (yet business-critical) Python 2 to Python 3 and modernized that software stack. I proposed high-level overhauls of systems that were causing long-running inefficiencies in business operations.
B.S. Bioinformatics
Part-time employee (stipend) for GSAF/CLAB. See work history. Earned no degrees.
Mechanical Engineering, Psychology, Philosophy
Psychology minor completed. Earned no degrees.
See work history.
Available upon request