Federal IT Policy Recommendations: 2021-2024

2020.12.18 – This article is part one in a series on IT policy recommendations. A PDF of the full recommendations may be downloaded here.

Executive Summary

The work of improving technology in government through policy initiatives over the last twelve years has been very successful; however, there will always be more work to be done. Today, there are several key steps that the Biden Administration could immediately address and work on over the next four years to continue to build trust and drive maturity in technology across government to “Build Back Better” — not just at the Federal level, but at the state and local levels as well. These steps include:
  1. Renew the Commitment to Open Data & Transparency
  2. Focus on Outcomes, not Box-Checking
  3. Drive Customer Experience & Human-Centered Design
  4. Solve Identity Once and for All
  5. Increase Attention to Small Agencies
  6. Manage Risk through Security
I’ve spent the last ten years working on civic tech from the local to the Federal level, inside and outside of government, and have been excited to see incredible gains in the government’s ability to deliver services to constituents. After the Obama Presidency, the work to drive innovation in government didn’t suddenly stop — the Trump Administration pursued an aggressive agenda of IT modernization. This included a major effort to update a large amount of outdated government technology guidance, laying the critical foundation for many modern technology practices and ideas.

From 2017–2019, I served in the Office of Management and Budget (OMB) in the Office of the Federal Chief Information Officer (OFCIO), where I worked on the new Federal Cloud Computing Strategy, “Cloud Smart.” I designed this strategy to drive maturity across the Federal Government by updating a variety of older, interrelated policies on cybersecurity, procurement, and workforce training. At the time, we had no idea that many of these initiatives, such as the update to the Trusted Internet Connections (TIC) policy, would be critical to enabling government-wide mission continuity during the COVID-19 response just a few months later.

From my past four years in government, I have seen many opportunities for improvement that did not get as much attention as they deserve. What follows are a few policy areas that I believe would build trust and improve service delivery to the American people. These aren’t all major innovations, but these efforts are needed to Move Carefully and Fix Things.

1. Renew the Commitment to Open Data & Transparency

Before joining the Federal Government, I spent years working for government transparency organizations including the Sunlight Foundation and the OpenGov Foundation. Although those and many other transparency organizations have shut their doors over the last four years, the need for transparency has never been greater. However, I no longer hold the naive belief that sunlight is the best disinfectant. As it turns out, disinfectant is a better disinfectant, and regularly putting in the work to keep things clean in the first place is critically important. Transparency is an active process, not an end in and of itself — and care will have to be given to rebuilding some of the atrophied muscles within government.

Share Data on the Fight Against COVID-19

First and foremost, to heal the country a new Administration will need to deal with not only the COVID-19 virus, but also the disinformation virus. To do so effectively will require addressing public trust around information quality and availability. The Administration should focus on providing timely, accurate information including infection rates from Health and Human Services (HHS), job numbers from the Department of Labor (DOL), housing data from Housing and Urban Development (HUD), and loan data from the Small Business Administration (SBA). By utilizing the new Chief Data Officers installed across government as part of the Open, Public, Electronic, and Necessary (OPEN) Government Data Act signed into law in 2019, the Biden Administration would be able to gather and centralize this critical recovery data. Everyone loves shiny dashboards, but I would instead propose that sharing raw data to allow independent analysis would be vastly more valuable than Yet Another Dashboard.

Revise the National Action Plan

My work on the Fourth National Action Plan for Open Government (NAP4) — and the challenges the Trump Administration faced in delivering this plan — are matters of public record. As we look towards the Fifth National Action Plan, it will be critical to improve engagement with the public and open government groups. Since most of the country has quickly become accustomed to remote collaboration due to the pandemic, I would recommend hosting a variety of virtual forums beyond the DC area to maximize input and idea-generation outside of the beltway. In addition to bringing in more stakeholders from across the country, this would also aid in empowering grassroots-initiated activities towards anti-corruption practices as well. I’d also recommend starting this process as early as possible to develop and gain traction around high-quality, ambitious commitments. There are also more than a few initiatives that civil society has proposed over the last decade that are worthy of reconsideration, including these from the NAP4.

Revise Agency Open Government Plans

As part of this work, OMB will need to update the long-neglected Agency Open Government Plans guidance, which has not been revised since 2016. Although most agencies have updated their Open Government plans since then, more ambitious efforts to publish data are needed. Notably, the Department of Veterans Affairs (VA) has not updated its plan since 2010, even though it has received increased scrutiny from Congress during this time. The VA Inspector General also previously found that the VA had been actively undermining efforts to measure its progress on improving patient wait times, by simply not recording data on the topic. With the new $5 billion Electronic Health Records (EHR) system being implemented today, it is even more urgent that the VA improve its transparency. However, all Federal agencies should be directed to publish data more aggressively and proactively, instead of only in response to Freedom of Information Act (FOIA) requests. Throughout the Trump Administration, key datasets were removed from government websites. The new Administration can both better tell its story and build confidence in the American people using government services by working to restore key data and increasing the volume of information that is actively shared.

Rebuild The Office of Science and Technology Policy

The Office of Science and Technology Policy (OSTP), headed by the Federal Chief Technology Officer, was previously the center of open government work under the Obama Administration, but this office and its authority were dramatically reduced over the last four years, with staff cut from 150 to fewer than 50. As a result, a major reconstitution of OSTP and other offices will be needed to drive these efforts.

2. Focus on Outcomes, Not Box-Checking

Narrow Oversight Focus to High-Impact Projects

Transparency goes hand-in-hand with oversight. The Office of Management and Budget is the primary oversight organization within the Executive Branch (other than Inspectors General), and is organized into smaller domain-specific offices. Staff in these program offices act as “desk officers,” focusing primarily on the 24 large CFO Act agencies. For smaller offices, a single individual may be tasked with oversight of several agencies’ billion-dollar budgets. OMB’s OFCIO is one such smaller office that has been stretched thin in this oversight duty while simultaneously fulfilling a variety of policymaking roles. However, the primary role of this office is to oversee technology implementation across government to ensure the success of projects. Given the few remaining staff, rather than being stretched thin on meaningless compliance, these resources could be better spent focusing primarily on the top five or ten major technology projects in government and making sure that they do not fail. Projects such as the State Department’s passport & visa modernization, the Department of Veterans Affairs’ new EHR system, and other similar initiatives could greatly benefit from closer scrutiny. By investing in hiring subject matter experts with skills in technology and managing massive projects, the government could save taxpayers billions of dollars while simultaneously improving services. OFCIO should also collaborate closely with the Office of Performance and Personnel Management (OPPM), which oversees the Customer Experience initiative across government, to make sure that these projects also meet the needs of the American people.

Restore and Expand The Office of the Federal Chief Information Officer

Moreover, OFCIO shares its limited budget with the U.S. Digital Service’s (USDS) core operations via the Information Technology Oversight and Reform (ITOR) Fund, which was slashed dramatically under the Trump Administration. More than just paying for staff salaries, this fund supports a variety of key technology oversight projects, such as the government’s software code sharing initiative, Code.gov. Cuts to this fund have caused OFCIO to eliminate programs like Pulse, which monitored and evaluated the maturity and security of agency websites. Moreover, this fund is flexible and can be used by OMB to fund promising technology initiatives at other agencies. The new Administration should restore the ITOR budget. It would also be useful to further supplement this fund by working with Congress to set appropriations to ensure the future of OFCIO and USDS. Like OSTP, OFCIO has experienced large setbacks. The constant budget cuts and toxic culture have decimated the office, and most of the talented & passionate subject matter experts I served with have since left. Reversing course on this office, and investing in hiring experts with practical experience in government technology — not just Silicon Valley thought-leadership solutionism — in these offices and beyond, will be critical for the success of Federal IT over the next four years. This will improve both the quality of policy that is created and the outcomes of IT projects governmentwide.

3. Drive Customer Experience & Human-Centered Design

Historically, the government has spent hundreds of millions of dollars on major IT projects. However, very little work is typically done to make sure that the right thing is being built — or whether the right problem is even being solved. And sadly, newer systems are not always better systems. Fortunately, initiatives on Human-Centered Design (HCD) — a process to engage service recipients as stakeholders in the design and implementation of those services and systems — that were started under the Obama Administration were built upon over the last four years. For instance, common private-sector practices like user research and testing were previously considered difficult in government because of review & approval requirements under the Paperwork Reduction Act, but through streamlined processes and blanket-permission requests these barriers have largely been eliminated for most agencies. These efforts need continued attention and support to maintain momentum.

Drive Commitment to Human-Centered Design Across OMB

At OMB, the Office of Information and Regulatory Affairs and the Performance & Personnel Management office worked to institutionalize much of this work over the last four years, including new governmentwide Customer Experience (CX) metrics guidance and a related Cross-Agency Priority Goal as part of the President’s Management Agenda. These metrics should be considered table stakes for driving customer experience, and much more work must be done in this area. For instance, every major (and possibly even minor!) IT project should have CX metrics defined as part of its requirements, and these should be tracked throughout the life of the project. For existing projects, these should be created retroactively — starting with the highest-impact public-serving systems — with adequate baselines so that agencies don’t just receive an “easy A.” The recent General Services Administration (GSA) Playbook on CX may provide a great starting point for most agencies.
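
To make this concrete, here is a hypothetical sketch of what “CX metrics defined as part of the requirements” might look like in practice: each metric gets a baseline measured before work begins, so a project can’t grade itself an “easy A.” All names and numbers below are invented for illustration, not drawn from any actual guidance.

```python
from dataclasses import dataclass

@dataclass
class CXMetric:
    name: str
    baseline: float   # measured before the project starts
    target: float     # the goal defined in the project requirements
    current: float    # measured throughout the life of the project

    def improvement(self) -> float:
        """Progress relative to baseline, as a fraction of the gap to target."""
        gap = self.target - self.baseline
        return (self.current - self.baseline) / gap if gap else 1.0

# Invented example metrics for a public-facing benefits application:
metrics = [
    CXMetric("task completion rate", baseline=0.62, target=0.90, current=0.75),
    CXMetric("avg. time to complete (min)", baseline=24.0, target=10.0, current=18.0),
]

for m in metrics:
    print(f"{m.name}: {m.improvement():.0%} of the way to target")
```

Because progress is computed against the pre-project baseline rather than an absolute score, a system that was already performing well can’t claim credit for improvements it didn’t make.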

Fix the Definition of Agile

Of course, this customer experience work is not a new idea — in fact, this sort of Human-Centered Design is a core tenet of Agile software development. Unfortunately, the Federal Government has completely missed the forest for the trees on the principles of Agile, and almost all law and regulation focuses entirely on one area: incremental development, delivering software in small, working chunks over time instead of delivering a full solution at the end of a lengthy development process. However, the real value of Agile is not in these small chunks, but rather in regular testing – both automated testing and having actual members of the public who use the service directly involved in the development process to give feedback as the project progresses. In this way, teams can make sure their software works and is actually solving problems for the people using the service, instead of assuming what those people want. In the private sector we joke that you’ll have testing either way — would you rather do it before your product launches, when you can get ahead of the issues, or after, when it’s a public embarrassment? Currently, agencies are required to report on their major IT investments and state whether these projects are developed “incrementally,” defined in guidance at the depressingly low rate of once every six months. OMB could refine its guidance to add additional Agile characteristics, including the requirement that software is tested with real customers throughout the development process. This alone would dramatically decrease the number of failed projects in government, potentially saving billions of dollars.
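
As a small illustration of the automated half of this testing, here is a sketch of checks that exercise a public-facing behavior on every change rather than once every six months. The eligibility rule and all numbers are invented for illustration; they don’t correspond to any real benefits program.

```python
# Hypothetical benefits-eligibility rule under automated test.
def monthly_benefit(income: float, dependents: int) -> float:
    """Toy rule: a base amount, reduced as income rises,
    increased per dependent. Not any real program's formula."""
    base = 500.0
    reduction = max(0.0, (income - 20_000) * 0.01)
    return max(0.0, base - reduction) + 100.0 * dependents

# Checks like these run automatically on every code change,
# catching regressions long before a member of the public does.
def test_low_income_gets_full_base():
    assert monthly_benefit(15_000, 0) == 500.0

def test_dependents_increase_benefit():
    assert monthly_benefit(15_000, 2) == 700.0

def test_benefit_never_negative():
    assert monthly_benefit(500_000, 0) >= 0.0
```

Automated checks like these cover the “does it still work” half; the other half — testing with real members of the public — can’t be automated, which is exactly why it belongs in the guidance.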

Fund Great Customer Experience

However, all of this work requires expertise to be done well, and expertise comes at a cost. Champions such as Matt Lira have called for the creation of Chief Customer Experience Officers (CXOs) within agencies, which would be an excellent next step. However, we must not repeat the mistake made in creating the Chief Data Officer (CDO) roles, where no additional funding was dedicated for these new roles or their staff – as a result, this became yet another hat for the CIO to wear at most agencies. Agencies will need increased funding in the President’s Budget both to hire new CX experts and to fund contracts to support these CX efforts government-wide.

4. Solve Identity Once and for All

Accurately verifying a person’s identity to satisfy Federal requirements, and creating a secure environment to allow them to log in to Federal websites & tools, is a difficult and expensive task for all agencies. This also remains one of the biggest challenges for both agencies and the people accessing government services today. Most agencies have multiple login systems, each tied to an individual service and not sharing information with the others. For instance, at the Department of Veterans Affairs, until very recently there were nearly a dozen different login systems, and each would separately require you to prove that you are who you say you are.


Meanwhile, the GSA’s Login.gov is an easy solution to this problem, and has been an overwhelming success for many agency services, including USAJobs, the website for most Federal job postings and application processes. Login.gov provides a simple solution to the very expensive problem of verifying the identity of a member of the public and allowing them to log in to a government website or application — to receive government benefits, register their small business, or any number of other services. This identity-proofing step is typically the most expensive part of the process, requiring the use of independent, private data sources like those used by our national credit bureaus. With Login.gov, once you’re verified on one site you’re verified at them all, so the cost for taxpayers is dramatically reduced. Although some agencies are starting to move to this platform, a new administration should mandate that all agencies use Login.gov, and require them to provide a plan to transition to this service within 5 years. In fact, usage of Login.gov is already required by law, but the law is simply not being followed (6 U.S.C. 1523(b)(1)(D)). Instead of leaving this as an unfunded mandate, the President’s Budget should include a request for Congress to provide appropriations directly to GSA to fund these efforts, ensuring this product is sustainable well into the future.

Use USPS for In-Person Identity Proofing

At the VA we also learned that many people have trouble with identity proofing over the internet for a number of reasons, including not having suitable cameras for capturing information from IDs, memory issues that preclude standard address-verification methods, and more. However, we found that people were much more likely to be successful when their identity was validated by humans in person at VA hospitals. The US Postal Service (USPS) has successfully piloted a service to check people’s identity in person at both USPS locations and at people’s homes, using the portable tablets their carriers already use for mail delivery. By working with Congress to help fund this service, identity verification could become a solved problem, while also providing a sustainable additional revenue stream for the desperately underfunded USPS.

Share these Services with State & Local Governments

Moreover, these services should be offered to state and local governments, which are incredibly eager for these solutions coupled with the expertise of the Federal Government. For instance, the same login that you use for USAJobs could be used to log in to your local DMV, once again making government easier and friendlier for everyone. To date, GSA leadership has not actively allowed sales to these governments, even though doing so is explicitly allowed under law and has been permitted for other similar services. The White House should direct GSA to provide this service to any government agency that wants it — and even to the private sector where appropriate! Recent bills in Congress have also prioritized security for state and local governments, so it would not be unreasonable to go even further and work with Congress to set appropriations to provide this identity service to them as well. Working closely with the Cybersecurity and Infrastructure Security Agency (CISA), GSA could turn this from a small project into a national program.

5. Increase Attention to Small Agencies

There are nearly a hundred smaller independent agencies that are not situated under the President’s Cabinet, and as a result they are largely ignored. However, they still have critically important missions, and these agencies also interface with the bigger agencies to exchange data, presenting a number of potential security concerns and operational risks. Although a focus on projects and outcomes — not just compliance — is critical, OMB needs to pay more attention to these smaller agencies. For instance, the U.S. Securities and Exchange Commission is a small independent agency of only about 4,000 people, but it is tasked with protecting investors and the national banking system as a result of the stock market crash of 1929. As such a small agency, it doesn’t have nearly the IT and cybersecurity budget of the large agencies. However, since it exchanges data with the Department of the Treasury, it can act as a backdoor into the larger agency. This sort of attack, exploiting a softer target to gain access to a more secure one, is extremely common at smaller scales and will inevitably become a focus for hostile nation-states in the future.

Fund Small Agencies’ IT

These smaller agencies will need additional resources to be able to deal with these threats while also keeping their services up to date. OMB can take the much-needed step of requesting larger IT budgets for these agencies. Furthermore, to date no small agencies have been selected for the Technology Modernization Fund — a “loan program” for agencies to fund IT projects — to help them improve their IT. Meanwhile, massive organizations such as U.S. Customs and Border Protection (CBP) — which has an annual budget of $17 billion and is not in any way short of money — have received an additional $15 million from this fund to update their legacy financial systems. Providing access to further funds for smaller agencies would give them an opportunity to improve their systems.

Drive Shared Service Use

Shared IT services are even more important for these agencies as well. In many cases the Chief Information Officer (CIO) will wear many hats — acting as Chief Information Security Officer (CISO), Chief Data Officer (CDO), and other roles. To be successful while stretched so thin, staff must take advantage of the capabilities of the bigger agencies to help fill their gaps, such as the Department of Justice’s Security Operations Center-as-a-Service offering. The idea of a “CIO in a Box” for the smaller agencies, providing information, services, and resources to these organizations, has been brought up several times. However, very little movement has been made on this initiative, and it is a large opportunity for further work and investment. Other shared services, including the aforementioned Login.gov, would also provide major benefits to smaller agencies, especially if the President’s Budget included additional dedicated funding to GSA for these projects for small agencies, so that they don’t have to scrape together the money out of their own limited budgets.

6. Manage Risk through Security

The common theme here is that cybersecurity remains one of the greatest challenges for technology in government today. The Federal Information Security Management Act (FISMA) sets many of the legal requirements for cybersecurity in government, and in practice it has transformed risk management into risk avoidance, reducing the overall risk tolerance of agencies and freezing any interest in trying new things. There is little hope of Congress fixing FISMA in the near future, and the attempts to date will only make things worse. In the meantime, the Biden Administration could supplement ongoing initiatives for security automation with additional resources, and implement the resulting best practices as official policy governmentwide.

Continuous Security Authorization of IT Systems

At the center of IT security in government is the Authorization to Operate (ATO) process. If you’ve ever worked for the government, I’m sure you groaned just having to read that phrase. FISMA requires that for all IT systems, agencies must implement a series of “security controls” — measures defined by the National Institute of Standards and Technology (NIST) to enhance security. This is an extremely laborious process, and a new product may take months to meet the requirements of a security review. The process generates a lot of paperwork — enough to stop bullets, though that isn’t very effective for keeping out nefarious attackers. Many agencies only re-assess products for these security controls on a three-year cycle — basically only checking to see if the door is locked once every few years. Moreover, the interpretation and implementation of these controls differ wildly between agencies.

Several agencies have started separate pilots to improve the consistency and speed of this process. For instance, some agencies are working to implement a “lightweight authorization to operate” (LATO) or a “progressive authorization to operate” process, where only a subset of the security controls must be reviewed to begin developing on a platform, with further controls added along the way before launching the application for public use. Others are moving to “continuous authorization,” a concept similar to continuous integration in software testing, using standard tools to automatically check the various security controls on an ongoing basis — providing real-time visibility into the security of these systems. Still other agencies are working to standardize security plan language, or to use natural language processing (NLP) as a means of reviewing paperwork-heavy controls faster. These efforts also relate to NIST’s work to standardize controls via a machine-readable structure called OSCAL, which is now being used by GSA’s FedRAMP program.

Some of these efforts were previously being replicated via the CIO Council, but with the exodus of OFCIO staff these efforts have stalled out. They should be spread across government via additional funding, staffing, and more pilots.
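
To illustrate the idea of continuous authorization, here is a minimal sketch: instead of reviewing controls once every three years, small automated checks run on a schedule and surface failures immediately. The control IDs loosely follow NIST SP 800-53 naming, but the checks and the system state are simplified stand-ins, not a real implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlCheck:
    control_id: str                 # e.g. a NIST 800-53 control identifier
    description: str
    check: Callable[[dict], bool]   # returns True if the control passes

def run_checks(system_state: dict, checks: list) -> dict:
    """Evaluate every control against the current system state and
    return a pass/fail report suitable for a real-time dashboard."""
    return {c.control_id: c.check(system_state) for c in checks}

CHECKS = [
    ControlCheck("AC-2", "No stale user accounts",
                 lambda s: s["days_since_account_review"] <= 30),
    ControlCheck("SC-8", "Transmission confidentiality (TLS 1.2+)",
                 lambda s: s["min_tls_version"] >= 1.2),
    ControlCheck("SI-2", "Flaw remediation (no overdue critical patches)",
                 lambda s: s["overdue_critical_patches"] == 0),
]

# Invented snapshot of a system's current state:
state = {"days_since_account_review": 12,
         "min_tls_version": 1.2,
         "overdue_critical_patches": 3}

report = run_checks(state, CHECKS)
# report -> {"AC-2": True, "SC-8": True, "SI-2": False}
```

In a real continuous-authorization pipeline the state would come from live monitoring tools and the control definitions from a machine-readable catalog such as OSCAL; the point here is only the shape of the loop — check constantly, report instantly.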


These are just a few of the policy areas that need attention in technology in government. There are still other agency-specific projects that need further attention that I haven’t covered here. However, these specific areas of focus will continue to build back better technology in government, and equip us with the necessary tools for the next decade or two.


Hiring and Programming Tests

2020.12.11 – I don’t mean to call anyone out, but I want to talk about programming tests in hiring and contracting. Stop doing them. This post originally appeared as a thread on my Twitter account. I’ve reposted it here for posterity with additional context.

I understand that people want to assess whether or not someone is capable of doing the job at hand. The problem is that every engineer, no matter how senior, is learning on the job, EVERY DAY. All of us look up solutions on the internet, all the time. Your assumption of what should be “baseline knowledge” for one person is based on your own experience. If you learned programming in college, a bubble sort algorithm is probably just muscle memory by now; for the rest of us, it’s completely irrelevant knowledge. It also has nothing to do with 99.9999% of jobs. Unless you’re working on some bleeding-edge optimization for a major tech company, that sort of fiddly info is way less relevant than principles and practices. It’s just a great way of showing your biases towards “traditional” candidates.

The worst interview a company ever gave me was to implement a hashing algorithm. I’ve been writing code for 35 years and I’ve never had to do this manually. I failed. It also told me the company didn’t know what to ask me, so I immediately withdrew my application. Tech isn’t about algo knowledge. Bad news to my younger self: it’s also not about passion. It’s about knowing how to solve problems.

Instead of coding exercises, here are a few ways to figure out if someone will be a match for the job you’ve got. In short: don’t ask questions that have a single, expected answer. If you know what the answer is, you’ve created a knowledge-based test – and just like with authentication, you’ve created a bad one.

  • Ask them about a problem they’ve solved recently, have them outline the steps they went through.
  • Ask them about a piece of technology that has made their life easier in some way. Whether it’s a build tool or kitchen gadget, how does this relate to what we do?
  • Ask them what new tools they are excited about, and what makes their life as a developer better? (I specifically avoid saying “fun” here, because working for a paycheck not passion is a perfectly valid reason to be a technologist.)
If you want to hire senior developers:
  • Ask them about building for audiences that are not like them. How do we create solutions for inclusion?
  • Ask them about why they chose to not use a piece of technology. How did they evaluate it? Cost, maturity, stability, community?
  • Ask them what makes a good culture for a team. What makes a team productive and empowered?
  • h/t @justgrimes: Ask them to explain how the internet works. Details don’t matter, you want to see if they can read a room and communicate to the level of their audience.
  • My favorite EQ question is “What’s the last good thing you’ve read?” Book/blog/T&Cs/etc. I always add the answers to my reading list – but also, high-EQ people will usually reflect the question and ask you back. (Unless they’re nervous, so DON’T go on this alone!)
Here are a few other ideas I received on Twitter:
  • via @abbeysuekos: “I’m also starting to get into questions about how people work with those above them (managers/leaders/executives) and those below them (ICs). There are never enough ?s about how you treat people with less power than you… I love questions about projects that didn’t go well. Plenty of good candidates will struggle or even get defensive, and that’s a flag for me. Important to handle the framing carefully and create an explicit safe space, but when done well it’s golden.”
  • via @HimalMandalia: “Had an interesting experience with a tech test years ago. Turned out it was one I’d done already, so told them and said “how about instead I show you and talk through what I’m working on at my current place?” They agreed. So did just that. Got the job.”
Do you have other questions you use to help identify talented candidates, without jumping through programming test hoops? Tell me about it here!


Welcome Home

2020.11.09 – I see many of my former technology colleagues now suddenly eager to return to government – or join for the first time – and I’m very excited to work with you all again! That being said, here are a few thoughts from someone who stuck around for the hard parts over the last 4 years.

  1. “Move fast and break things” failed. As a result, we inherited a lot of fast-moving broken things. Sustainability is the most important principle in government tech today. “Move carefully and fix things.”
  2. “Build with, not for” - Cuán McCann (that one is still the most important and gets its own tweet) Note: Cuán’s talk at CFA Summit in 2014 begins with “At the risk of creating a massive existential crisis…” and the following five minutes created one for me. It completely changed how I look at the world and approach The Work.
  3. Technology is almost never the solution to the problem. You need a deep understanding of culture, policy, budget, acquisitions, etc. to be successful. We don’t need ANY more shiny new websites or hackathons. Your first year should be spent understanding the systems.
  4. Fam, choose boring tech over shiny. Those mainframes and COBOL still work just fine after 50 years of service. Those React apps you’re writing are legacy before they launch, at a hundred times the cost, and no one can run them when you leave - making them abandonware.
  5. Government doesn’t need disruption, or even innovation. Many of us who came in as “innovators” are now the career bureaucrats just keeping the place from burning down. Listen to our expertise and work with us.
  6. People don’t want to hear this, but… this isn’t a job for tourists. Building relationships to cause change takes time. If people know you have one foot out the door, they’re not going to trust you. Think about what you’re willing to sacrifice before signing up.
That all being said… Welcome Home. I’m looking forward to collaborating with you all soon.



The Secret Equation of Job Satisfaction

2018.12.16 – Many lengthy books and articles have been written on how to be fulfilled at work, good management, and keeping your staff engaged and excited. However, I’ve found one simple equation that is the secret to all of these:

Satisfaction = Volition / Friction

If you’re not satisfied with your job, it’s probably because you don’t feel effective or that your work has the desired outcome – your volition – or because there’s too much resistance, bureaucracy, barriers, and day-to-day minutiae for you to enjoy the work – the friction. Satisfaction is a measure of your volition over the job’s friction. More volition increases satisfaction, as does less friction.

Let’s say that your control over your job is around 5 on some magical undefined scale – you can do the things you need to some of the time. However, the amount of paperwork you have to fill out to do the job is around a 10. 5/10 = ½, so you’re probably not going to be very satisfied. On the other hand, if you have lots of control and can make all the decisions you need – let’s call this 10 – and you have no daily design-by-committee meetings to wrangle – let’s give this one a 2 – you’re at 10/2 = 5, which is looking pretty good. You don’t really need the scale or the points here; they’re just to show how the two relate to each other. The ratio of volition to friction will determine your level of job satisfaction.

Research suggests that people feel most fulfilled when they are challenged just beyond their current capacity – too little challenge is boring, too much is overwhelming. Although being able to direct your work is the most obvious component of volition, it can include many supplementary factors. The alignment of your skills and background to your work can impact your volition. Feeling that your work has an impact on the world can also be a major element. Friction is comprised of several components as well.
If your organization has unnecessary processes and procedures, those barriers to productivity will cause friction. If you work more than 40 hours per week, or your commute is long, that will likely cause friction by cutting into personal time. If your organization requires extensive reviews and buy-in from multiple stakeholders to accomplish tasks, that may cause friction. Anything that causes you to become quickly exhausted at your job is likely adding to the friction.

To increase your job satisfaction, you’ll need to increase your volition in the job or decrease your friction. To increase your volition, you can take many steps in your current job: outsourcing or delegating unwanted tasks, taking on different projects that you enjoy more than your current ones, learning key information or skills, increasing the level of challenge by taking on harder projects, or decreasing the level of challenge by collaborating with experts. To dramatically increase your volition, you may need to find a new job.

There are many ways to decrease the level of friction as well. You can outsource and delegate tasks to decrease your weekly hours, work from home several days to decrease your weekly commute time, or decrease the number of weekly meetings with your teams. Any of these can make the job feel easier even if the workload hasn’t decreased.

This isn’t to say that there aren’t terrible jobs in the world that no one would ever enjoy. You may be in a job where neither of these variables can be changed enough to make a major impact on your happiness. It’s a good idea to regularly assess where you are on this scale, just to make sure your satisfaction isn’t slipping away.

If you’re familiar with Agile methodologies, this should sound somewhat familiar. “Individuals and interactions over processes and tools” is a core tenet of the Agile approach. Putting the individual in control and reducing the cruft of processes is just good practice.
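The toy arithmetic above can be written as a tiny function. This is purely an illustration of the ratio as described – the 1–10 scale and the two scenarios are the article’s own invented examples, not a real measurement:

```python
def satisfaction(volition: float, friction: float) -> float:
    """Toy model: satisfaction is the ratio of volition to friction.

    Both inputs are subjective scores on an arbitrary scale (say 1-10).
    """
    if friction <= 0:
        raise ValueError("friction must be positive on this toy scale")
    return volition / friction

# The two scenarios from the text:
print(satisfaction(5, 10))   # 0.5 -- some control, but lots of paperwork
print(satisfaction(10, 2))   # 5.0 -- full control, little overhead
```

The numbers don’t matter; the point is that raising volition or lowering friction both move the result in the same direction.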
As a manager, I often use this equation to help increase my staff’s satisfaction as well. I may not always be able to give a staff member more control over their work, but I can drop half of our check-in meetings to free up their time, or find ways to reduce the amount of paperwork they have to file to do their job. In many cases, however, I can do both at once – by allowing staff to act independently in their projects with less oversight. This brings up another equation that I’m fond of:

Trust + Autonomy = Delivery

In general, if you hire good, talented people, all you need to do is trust them to do their job, give them the autonomy they need to do it, and they’ll deliver good work. (Buckminster Fuller talks about this in Operating Manual for Spaceship Earth, in describing the “Great Pirates.”) Giving your people the space they need to execute and the support they need to see their work through will build trust and autonomy. Best of all, trust can increase volition, and autonomy can decrease friction, resulting in delivery and satisfaction.

Each of these components can be broken down into various topics for in-depth study; the high-level equation helps identify which component is missing to get the desired results. I hope you’ll keep this equation in mind when you’re approaching your own work, and see if you can find ways of creating a better balance.


The Hype Market

2018.10.24 – It takes time, money, and other resources to execute on any new information technology initiative. In theory, there should be a return on investment for any new IT development. However, in the current market landscape, a lot of effort is being wasted on reinventing the wheel or misapplying solutions, rather than on providing greater value to citizens and customers. The relatively low cost of experimenting with and implementing new processes results in a fetishization of abstraction, leading to further and further complexity even when a proportional benefit is not achieved.

Technology is a Bridge

Most businesses and government services aim to connect a person’s intention to an outcome. Whether we’re helping someone call a restaurant to make a reservation, or helping a Veteran understand their benefits, we’re aiding someone in solving a problem. In short, we’re bridging the gap between a customer and a service. When technology is introduced into the process of delivering a service to a customer to automate one or more steps, it effectively shortens that gap. In general, technology is not seamless in this process – that is, there are almost always steps between the customer’s intent and the resulting action that have to be filled through human intervention or additional services. You can’t think “pizza” and have a pizza instantly appear – but an app to order pizza removes some of the burden of calling the pizza place and having them take your order, prepare it, and deliver it to your doorstep.

Today there are new automation tools available to practitioners who want to create access to services through technology. If you want to create a service so that someone can order a pizza, you can set up a website or a mobile app to do so, because there are already computers, and the Internet, and web browsers on the computers, and protocols for everything to talk to each other. So for anyone trying to solve a problem, the real value is in making the process simpler or easier through further automation – say, having your website automatically show the order on a screen at the pizza place, or automatically send you a text with an ETA when the pizza is out for delivery. Based on the previous diagram, the goal is to expand the boundaries of technology to further fill the gaps between the customer and the service. However, that gap is not where the technology industry focuses most of its time and money.
Software engineers are spending more and more of our resources mucking about with tools that are simply thinner and thinner slices of micromanagement around already-solved problems. In practice, the industry is just creating additional layers that are largely unnecessary for the end customer. Here’s a real-world example, again using a website. In 1998, it took a few hours to set up a website; most of our time was spent designing it, then hand-coding HTML to post on a webserver. As our tools evolved, the time to perform these steps decreased, but the complexity of the tools themselves increased dramatically. Today, it can take ten times as long to create a basic website as it did decades ago. After twenty years, technology has moved a little closer to the person requesting the information due to improved User Experience, and the end user can get to the information a little faster and easier, but mostly there’s a lot of additional complexity in the middle layers. These middle layers have created a very rich market of expensive services and skills based on these bleeding-edge technologies.

Blockchain: A Parable

As a result, the world has seen the rise of numerous solutions with overblown promises of impact, fueled by venture capital speculation. Blockchain (or the distributed ledger) is a popular example, which has acquired an innate brand of “value” through its association with Bitcoin, a popular cryptocurrency built on the technology. Most computer systems take data from one person, put it into a storage location (usually a database), and then output it for someone else’s review. One problem people have with these systems is that there’s no way to validate that the data submitted by the first person is the same as what the person on the other end receives. There are numerous points along that path where a malicious actor – or a broken process – can modify or corrupt the data. That can be a hacker exploiting a vulnerable system, data corruption due to failed hardware, or any number of other failures. The submitting person could also just be lying about the data, or the person receiving it could lie about the results. Companies selling blockchain as a solution promise that these points of failure can be eliminated by having an authoritative record – or ledger – of all changes to the data. In reality, however, blockchain fulfills only a small piece of this process. In most cases, it simply replaces the database with another type of database that provides additional checks. This still doesn’t solve the original problem of ensuring that the data is valid on both ends. To visualize this, let’s say I have four apples. I write down on paper that I have six apples. You read the paper and double-check that I do in fact say that I have six apples. You have no proof I ever had six apples, only my word that I did. You could then go report that I have eight apples, that you double-checked that fact, and that you promise it’s true. This is how blockchain gives a false sense of security.
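To make the parable concrete, here is a minimal sketch of a hash-chained ledger in Python – a toy illustration, not any real blockchain implementation. It shows the distinction the text draws: the chain can prove that its records haven’t been altered since they were written, but it cannot prove that the claim in the first record was ever true.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

def make_block(record: str, prev_hash: str) -> dict:
    """Create a block whose hash covers the record and the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"record": record, "prev": prev_hash, "hash": digest}

def chain_is_valid(chain: list) -> bool:
    """Verify each block's hash and linkage -- this is all the ledger can promise."""
    prev_hash = GENESIS
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != prev_hash:
            return False  # linkage broken
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False  # record altered after the fact
        prev_hash = block["hash"]
    return True

# A false claim goes in, and the ledger happily validates it:
chain = [make_block("I have six apples", GENESIS)]
chain.append(make_block("Double-checked: six apples", chain[-1]["hash"]))
print(chain_is_valid(chain))  # True -- the ledger is intact, but I only have four apples
```

Tampering with a stored record does make validation fail, which is the real (and much narrower) guarantee being sold: integrity of the record, not truth of the data.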

Addressing the Market Forces

And still the sales pitch works! Companies are spending millions to adopt “blockchain” technologies – many of which are so stripped of features that they can’t even be called distributed ledgers anymore. But even if purchasers don’t know the difference, technologists must know that much of this additional complexity is merely hype. Why would software engineers want to use it in the first place?

One way to consider this is that engineers obsess over generating higher and higher orders of abstraction. Abstraction, however, generally increases complexity by moving concerns to a higher-order function. Think of going from a wheelbarrow to a bicycle. Bicycles can reduce the amount of energy needed to move a load around due to gears – but those gears are also more complex than just an axle and wheels. Suddenly you have extra parts to maintain and grease, and to build them from scratch you’d need to understand the math behind gearing ratios.

With physical processes, resource limitations and cost-effectiveness become natural restrictions on automation. But in software, there are rarely such restrictions; rather, software developers are actively encouraged to create more complex tools, often as “side projects” outside of their main working hours – frequently eschewing any hope of a healthy work-life balance. These developers and tools are given prominence on stage at most technology conferences, becoming superstars of the technology world and further adding to the hype. Very few people garner attention for solving common problems with common tools. (For example, there are four major competing Javascript bundlers today, while make, written in 1976, can do just about everything.) To look at this cynically, extra complexity means higher cost. Technology as a business is largely about hype: the newest and coolest tools are a selling point for most tech-reliant industries (which is most industries).
The rise of overseas outsourcing, to areas where costs are much lower, continues to drive down the cost of technology development in the West. But using all of these rapidly-evolving technologies increases the cost to develop, so engineers can demand higher salaries. It’s only by continuing to evolve the market of solutions toward newer and trendier alternatives that the system can maintain its value; otherwise everyone would settle on a minimum baseline and scaffold up from there – like using dumb “feature” phones with mail and chat features instead of smartphones. But that wouldn’t allow anyone to maintain the higher prices.

These solutions are, as a result, extremely short-lived. Newer model iPhones come out annually, and a new major Javascript framework establishes market dominance on the web every 18 to 36 months or so. Although at the time of writing we are at the height of React’s reign, we’ve already seen that market splinter and fragment, adding new layers within React, including Redux, Sagas, and others. Every piece of software released today is the legacy technology of tomorrow.

This market churn makes it difficult for new engineers to enter the workforce, as any skills learned in a developed curriculum will largely be out of date by the time the student graduates. Only full-time software engineers are able to compete, and they must dedicate a considerable part of their time to staying current, lest they fall behind and become unemployable. Though this will eventually give way to a secondary market of expensive engineers specializing in legacy systems, as seen during the Y2K panic. The Federal Government also continues to provide a welcome market for outdated technology skills.

Moving Past Hype Through Accountability & Simplicity

The hype cycle and the cult of automation aren’t going away anytime soon. But we all – government and private sector alike – can work to be more cynical about the technologies presented to us, to dive deep on the topics, and to look to experts to inform our opinions. We can demand proven, reliable solutions with longevity instead of just the newest buzzwords. And we must plan for the inevitable economic downturns that will impact the IT market, as we saw in the “dot com” crash of the early 2000s. There will always be money to be had in IT, but the bubble must burst (again) eventually.

Another thing we can do to combat this trend is to stand up for simplification; using the easiest – and most well-known – solution will often be the most cost-effective. This will inevitably erode the market, driving down costs and associated salaries, and reduce technology practitioners to skilled tradespeople like plumbers or electricians. In this way, we can plan for a more stable future technology market, with far greater longevity. Through this dedicated effort and planning, we can focus on the areas of greater value instead, creating a more seamless Customer Experience that actually solves problems for people more efficiently and cheaply, instead of just making increasingly impressive profit margins for private companies.


The Myth of the Flat Organization


“Floor 1997–2000” by artist Do-Ho Suh

Over the last few years, an increasing number of technology companies and agencies have adopted a flat organizational structure. This system promises to improve efficiency by removing management layers, leaving as few as possible. I believe that it is also almost entirely imaginary.

Since the popular game company Valve published its employee handbook online in 2012, companies have looked to the text for ways to improve their own processes. Growing companies stopped hiring middle managers, project managers, and other critical roles in favor of letting engineers manage themselves. This was a very attractive move for smaller organizations, since it reduced the number of positions they needed to hire for, dramatically cutting their operating costs. For engineers, it can be initially compelling to set their own priorities amongst themselves – and potentially get an inflated title such as “Lead” or “VP” (though often with no additional compensation).

Management is best done by managers

However, in practice, it’s rarely so simple. By serving dual roles on a team, performance often suffers as staff try to context shift. Good engineers do not automatically make good managers — working with computers and working with people require very different skillsets. A company wouldn’t expect a business manager to write code, so why should the reverse be encouraged?

“Flattening the org chart just means creating a hierarchy of emotional labor”
Steven Reilly

In a flat organization, it is no one’s dedicated job to handle many of the complex human interactions that a business must handle. At one organization, I found myself staying up late nights, writing human resource policies, vacation plans, and codes of conduct. From project management to conflict resolution, functions that are filled by dedicated staff in other businesses frequently become after-hours extra labor by engineering staff. So-called soft skills and human-oriented problems are treated as secondary to achieving product goals.

It may be necessary for staff to serve multiple roles initially for very small organizations to function, but beyond the first dozen staff this is not a practical or ethical way to accomplish tasks. If you’re large enough to be thinking about insurance plans and snack delivery, you’re large enough to hire management. A good project manager or human resources officer is a much better investment than a lavish office space.

Even in large companies, it’s very common for women to be expected to perform administrative duties outside of their roles, such as taking notes or scheduling meetings. In flat organizations, it’s very common for many of the “extra” tasks to be assigned to women and minorities first. The inherent cultural biases in technology only increase the odds that these groups will be expected to do more than their share. It’s also common for more junior staff to be assigned the less exciting work – for instance, fixing bugs instead of writing new code.

Hierarchies form anyway, and unfairly

“A flat org just replaces vertical hierarchy with concentric levels of inner circleness… If you remove formality you can avoid accountability and responsibility with policy of openness that is a convention of silence in practice.”
Ozzy Johnson

In many organizations that aim to be flat, a hierarchy emerges based on social cliques and personal relationships, instead of an officially established order. The ideas of staff who spend time socially with owners and executives tend to be adopted more readily. In many organizations I’ve worked for, I’ve frequently seen talented staff passed over for promotion in favor of friends of company owners and managers.

People, by nature, surround themselves with like-minded — and like-cultured — individuals, creating echo chambers and consolidating power in in-groups. This almost always puts women and minorities at a disadvantage. In a structured organization formal policies on hiring and process can help to prevent the biases and inequalities that come from such in-groups, but a flat organization has no such defense from becoming a good ol’ boys club.

It has been reported that this was the situation at Valve as well. Since there is no official hierarchy, there is often no way to call out the favoritism that comes with these factions, and rarely any formal process for resolution. Even when group-based decision making is a part of the process, individuals outside of the power centers tend to speak up less, adding to the asymmetry.

Although I was initially very excited about the prospect of flat organizations, I have yet to work in one that was effective. Modern business practices and the laws that govern them have mostly evolved to help create a more level playing field for employees. Although tech culture fetishizes rule breaking, disdain for authority, and meritocracy, these only contribute to the very toxic culture that has evolved. For the good of our industry, we could all do with a few more rules — and a few more managers.
