30 Months (and 8 years) of HumTech

Giulio Coppi
18 min read · Jan 2, 2021


In 2012, I turned my old tech passion from a side occupation into my full-time job. I gave up an open-ended contract as a generalist aid worker to launch myself into the humanitarian technology space as a consultant, researcher, and founder of a (now closed) platform for open source humanitarian technology.

In 2018, I moved from New York to Oslo to join the Norwegian Refugee Council (NRC) as Global Digital Specialist. NRC operates in over 30 countries across 4 continents, and at the time I was the only full-time data and tech expert for the whole Field Operations section, so the title is quite accurate.

As we approach the conclusion of this wild year, which also incidentally marks my 30th month on the job, I feel it’s the right time to share some of the insights I have gained so far, the things I loved witnessing, and those I found most frustrating\enraging.

After all these years, my mixed background (mostly revolving around legal studies) still makes me feel like an impostor of sorts. Not an engineer nor an ICT specialist by training, not an Information Management nor a Data Protection professional, I sit at the crossroads, try to learn as much as I can, and translate what I understand to others. Please engage with me if you think I’m wrong in form or substance. I beg readers to believe that whatever I write comes from a place of curiosity, learning, and exchange, not teaching.

Also, as this is on the long side and nobody has time for wordy stuff, I broke it down into sections. While the “Insights” section is slightly more narrative, the “Love”\“Hate” ones are mostly self-explanatory titles with a short comment. Please feel free to jump straight to what interests you, and disregard the rest.

Insights

  • Building a Portfolio approach to navigate complexity
  • Emerging technologies are not ‘out of grasp’ for public \ NGO sector
  • Practical example: Shaping a principled and useful Artificial Intelligence

What I love seeing

  • Private, public, academia, and humanitarian sector coming closer and closer to each other
  • New actors, players, experts from previously overlooked groups leaping forward
  • Intersectionality is increasingly concrete (but still a long road ahead)

What I hate seeing

  • Decolonization efforts being hijacked: Show us the money
  • Digital literacy is a nice-to-have among aid workers
  • Lack of spotlight on non-Global North led humanitarian research (plus an offer to Global South researchers)

If you want to reach out to discuss any of the issues below (or anything else really), feel free to write to giulio.coppi (at) nrc (dot) no or g.coppi (at) hey (dot) com

Caveat: This is not an M&E report or a peer-reviewed article, but a collection of personal \ professional opinions, simplified and “neutralized” to remove confidential or NDA-related information. Opinions shared here are not necessarily endorsed by my employer or by the institutions I’m affiliated with.

Insights

Building a Portfolio approach to navigate complexity

Following in the footsteps of others (see for example Dark Matter Labs, Catalyst, UNDP and OECD), I have been experimenting with Portfolio approaches, and I haven’t regretted it one bit.

Do you remember the time when CEOs and very famous Organizational Development Consultants were shrieking against distracted teams running in all directions, sternly recalling the importance of focusing time, energy and resources on one single objective? Well, the portfolio approach is quite the opposite.

Somewhere in between our elders’ “don’t put all your eggs in one basket” and a cynical financial investor suggesting you diversify your assets, the innovation portfolio is more than just a collection of projects. It’s a way to make tacit connections explicit, to quickly identify where “one project could leverage off another project’s capability” (Rahwidiati).

It is also a way to constantly remind yourself that uncertainty is a feature, not a bug, and should be dealt with accordingly. What can you afford to lose in that portfolio? Which project(s), or components therein, can you freely bet on, and which would have cascading negative effects on dependent initiatives? What would the domino effect be in each and every possible scenario?

The Modular Matrix below (with no project names) is a tool I built to help me navigate the complexity of a set of projects with apparently little in common, but in reality connected by invisible lines that stem from the potential hidden in secondary components, “sleeper functionalities”, “nice to have” product requirements, and many other factors.

What you see at the center (“Core”) is our Beneficiaries and Service Management Database, currently leaving the pilot phase to enter global rollout across all field offices. It is the cornerstone of all other projects, and the synthetic moment (Fichte\Hegel) of our portfolio. The existence of a shared layer allows us to decrease the degree of technical separation between seemingly unrelated projects, ensures data standardization, and increases the chances of future direct integration if need be.

It’s a very simple and frankly unoriginal tool, but it served me well and that’s all I can ask for.

Modular Matrix — Digital Portfolio
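For readers who think in code, here is a minimal sketch of the logic behind the matrix: each project is described by the components it relies on, and overlaps with the shared Core layer (or with each other) surface the hidden connections. All project and component names below are hypothetical placeholders, not our actual systems.

```python
# Sketch of the Modular Matrix idea: projects as sets of components,
# with overlaps revealing shared capabilities. Names are illustrative only.
from itertools import combinations

CORE = {"beneficiary_registry", "service_catalogue", "case_referrals"}

projects = {
    "edtech_pilot":   {"beneficiary_registry", "chatbot_frontend", "sms_gateway"},
    "cash_programme": {"beneficiary_registry", "service_catalogue", "payment_gateway"},
    "access_mapping": {"gis_layers", "incident_feed", "sms_gateway", "case_referrals"},
}

# Which components of each project already live in the shared Core layer?
for name, components in projects.items():
    print(name, "-> shared with Core:", sorted(components & CORE))

# Which projects could leverage each other's capabilities outside the Core?
for (a, comps_a), (b, comps_b) in combinations(projects.items(), 2):
    overlap = (comps_a & comps_b) - CORE
    if overlap:
        print(f"{a} <-> {b}: potential leverage via {sorted(overlap)}")
```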

The Modular Matrix helped me make sure our projects were spreading our bets across a large enough canvas, and that all dependencies and connections were visible or at least imaginable. But that still didn’t help me qualify our digital portfolio. Are we being bold enough? Are we squandering resources by going all in on crazy ideas? Are we too conservative or maybe divided, each of our teams looking at their own belly button and ignoring the potential of expanding from digitization to digital transformation (Ross)?

I developed the Risk Radar (see below) to assess the distribution and balance of our projects, and to make sure we make our investments in a conscious manner. Moving clockwise from the bottom-left quadrant, we go from sector-specific projects harnessing well-known technologies, to multi-country systems using familiar platforms, to potentially-transformative-but-wildly-unknown solutions, and finally to very sector-specific deep experiments pushing the boundaries of what we have seen happening so far.

Just as a clarification, the “safety\exploration” axis refers to our own exposure to failure; it is absolutely not related to potential risks to people. We design each and every process based on the worst-case scenario, and we let go of (or halt, if the process is ongoing) any idea that might have a reasonable, even if remote, chance of resulting in any form of harm to people.

Risk Radar — Digital Portfolio
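As a rough illustration, the quadrant logic of the Risk Radar can be expressed in a few lines of code. The scores, thresholds, and project names below are invented for the example; the real radar is a judgement call, not a formula.

```python
# Toy version of the Risk Radar: two axes (scope breadth, technological
# exploration) and four quadrants. All values are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    breadth: float      # 0 = single-sector, 1 = multi-country / organization-wide
    exploration: float  # 0 = well-known technology, 1 = wildly unknown

def quadrant(p: Project) -> str:
    if p.exploration < 0.5:
        return "safe bet, broad platform" if p.breadth >= 0.5 else "safe bet, sector-specific"
    return "transformative unknown" if p.breadth >= 0.5 else "deep sector experiment"

portfolio = [
    Project("global_beneficiary_db", breadth=0.9, exploration=0.2),
    Project("ai_edtech_chatbot",     breadth=0.3, exploration=0.8),
    Project("access_forecasting",    breadth=0.6, exploration=0.7),
]

for p in portfolio:
    print(f"{p.name}: {quadrant(p)}")
```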

As it happens, even if you adopt a broad and open approach to innovation, some form of consolidation and negentropy is inevitable. Resources, and especially the willingness to dedicate them to innovative solutions, are limited and often unevenly distributed.

Some managers will “get it” more than others, thus actively attracting opportunities and seeking out additional resources when needed. It follows that more often than not, some teams will take on or volunteer for far more than their initial share of testing.

Far from being a problem, this is actually a blessing. It allows us to run deep demonstrations, developing “new capabilities ‘by doing’ and to work ‘deep’ in a few places (as opposed to going broad in many). The idea behind this is to use the demonstrations to show real progress on the ground when it comes to system transformation, and to leverage this for a wider shift in how [the organization] tackles complex challenges.” (Begovic\Haldrup)

Our portfolio is rich and getting richer. We now have under our belt experience with AI in EdTech, multi-channel global communication systems, biometrics (we opted out of biometrics following results from simulated tests; ping me if you want to know more, or check this excellent report by Oxfam\Engine Room), and distributed ledger technologies; in a few weeks we will add advanced GIS and anticipatory analysis for humanitarian access.

When it comes to anticipatory analysis we are trying to completely flip how humanitarians usually imagine predictive analytics. Instead of entering the murky waters of “when and where the next mass displacement\conflict event will happen”, we decided to focus on ourselves, our role, and our potential for impact as enablers.

How can we best allocate our resources to achieve maximum impact in mitigating access restrictions, given a set of most likely scenarios? Which factors can we use our influence on to achieve positive results for the whole sector and for local communities in their access to assistance? It’s not about us anymore, but rather about how we can become better allies to those in situations of vulnerability.
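To make the flip concrete, here is a toy sketch of the kind of reasoning involved: candidate mitigation actions are scored by their expected impact on access, weighted across a set of likely scenarios. Scenarios, actions, probabilities, and impact scores are all invented for illustration and do not come from our actual analysis.

```python
# Rank our own candidate actions by expected impact on access, weighted
# across likely scenarios, instead of trying to predict the scenarios
# themselves. All figures are invented placeholders.
scenarios = {"checkpoint_closures": 0.5, "bureaucratic_delays": 0.3, "active_hostilities": 0.2}

# Estimated impact of each mitigation action under each scenario (0-10 scale).
actions = {
    "pre-negotiated corridors":  {"checkpoint_closures": 8, "bureaucratic_delays": 2, "active_hostilities": 5},
    "local staff legal support": {"checkpoint_closures": 3, "bureaucratic_delays": 9, "active_hostilities": 2},
    "stock pre-positioning":     {"checkpoint_closures": 6, "bureaucratic_delays": 4, "active_hostilities": 7},
}

expected = {
    action: sum(prob * impacts[s] for s, prob in scenarios.items())
    for action, impacts in actions.items()
}

for action, score in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{action}: expected impact {score:.1f}")
```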

Photo by Jorge Salvador on Unsplash

Emerging technologies are not ‘out of grasp’ for public\NGO sector

There is a common misconception that frames humanitarian\nonprofit\public actors engaging with emerging technologies as wannabes, attention-seekers, or pretentious.

Truth is, my experience so far leads me to affirm with good confidence that when it comes to emerging technologies, everybody — no one excluded — is winging it as they go.

It is not by chance that I started researching and working on emerging technologies when I went all in on HumTech, instead of focusing on mainstream approaches. I actually argue that humanitarians \ public actors have a moral obligation to understand, engage with, and influence future technologies, and that tech companies have an ethical duty to listen and learn. Unless, of course, their CEOs want to keep practicing the fine art of the “regreditorial” (Tiffany; Eveleth).

Contrary to popular belief, fully formed tech companies are never really agile, despite their dev teams obsessing about it. Once a tech stack is up, every small change requires huge efforts and precautions not to upset the juggernaut. Ask Facebook and Slack, who despite all their resources had to invest enormous time and effort changing their programming language because the original one didn’t age well enough to support an exponential scale of growth (JaxEnter; Metz).

This is also one of the reasons why huge tech companies invest in startups and then buy them up once their products prove to be mature and complementary to their existing tech stack: they simply cannot handle building new verticals from scratch inside their own ecosystem.

Photo by Wolfgang Hasselmann on Unsplash

Humanitarians and public actors have historically waited for tech companies to make the first move. We all agree that having a commercially viable, tested, and hassle-free product already available for licensing is reassuring and practical, but it also comes with a major downside: that product, and the coding principles that inspired it, are now written in stone.

Your users cannot download the app and prefer communicating via SMS? The app needs an offline mode to be carried into hard-to-reach areas? You cannot use it without E2E encryption? Your access level management is way more complicated than the one offered by default? You don’t really need a lot of dashboards but would rather have templated reporting models? You don’t have a dedicated full-time resource using the system but would rather have your field teams taking turns using some scattered functionalities here and there? Whoopsies.

More importantly, once the juggernaut is up and running, both aid actors and tech companies will be out of luck if they try to mitigate risks linked to its implementation. That train has gone; now it’s just a matter of avoiding harm by rolling back deployment, or setting up solid damage control strategies. Worldwide, researchers from Science and Technology Studies (STS) have been ringing this bell for decades, but their pleas have been largely ignored. That is, until former Big Tech experts decided to make amends for contributing to a tech dystopia and downright re-invent STS as a new thing (Irani & Chowdhury).

Even STS, however, cannot do it alone. The field mostly adopts a human rights lens in researching technologies, but the humanitarian gaze is just as important. While STS scholars often fear the abuse of new systems or their side effects, humanitarians fear any system as a whole. Operating in environments where vulnerabilities can be brutal and power imbalances extreme, we know that anything that can be weaponized will be, at some point.

In short, when it comes to technology humanitarians embrace Chekhov’s Gun principle, but to us everything is a gun.

Photo by Hector Falcon on Unsplash

Practical example: Shaping a principled and useful Artificial Intelligence

An example from the ‘tech flavor of the year’, Artificial Intelligence (AI), can better show the conundrum of balancing tech-shaping and humanitarian principles, but also the importance of doing it.

AI is still at a very early stage, with very few functionalities advanced to the point where human supervision is unneeded or unhelpful. Mostly, successful deployments of advanced AI systems have shown results in fields where immense datasets required both out-of-the-box (but logical) thinking and immense processing power to fill cognitive gaps that would have required a decade of research from a whole team.

But does the same also work for that fancy AI system a tech company pitched as groundbreaking and plug-and-play for humanitarian applications? Not really. It may one day predict broad patterns of conflict and displacement, but it still won’t be granular enough to really replace the need for direct evaluation and needs assessment. In the end, there are good chances that AI systems for conflict and displacement won’t be able to “predict” much more than a local conflict analyst who knows the trends and patterns of violence in the area.

The need for humanitarians to put their heads under the hood of the AI machinery goes beyond simply protecting themselves from disappointment, or avoiding wasting their time and money. As humanitarians we too often forget that the only ones who can really ‘predict’ conflict-related displacements are those actors who are active parties to the conflict. If you can do the same, or if you contribute to an AI system that ends up doing that, you may have crossed a line from which there’s no way back.

Right now, however, many such algorithms are mostly trying to make the best use of insufficient and polluted data to isolate conflict related signal from overwhelming noise.

There’s no way to ignore the elephant in the room. When tasked to build your smart system, tech companies rely on:

1) Openly accessible or commercial datasets, and

2) Your majestic data warehouse\lake, constantly refreshed by hundreds of data pipelines from several horizontals.

The ever-blessed Humanitarian Data Exchange (HDX) team has been working on the open side of #1, while on #2… well, let’s just agree not to reopen the white lies we told ourselves in the first half of the past decade about Big Data. Telford (HDX Centre Lead) explains it in a very diplomatic and professional way when she says that “the challenge in the humanitarian sector is bringing together small amounts of non-standardized data, mostly stored in spreadsheets” (Telford).
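As a small illustration of the open side of #1, datasets on HDX can be searched programmatically. The sketch below assumes HDX’s public CKAN-style search endpoint; check the HDX API documentation (or the official hdx-python-api library) before relying on it.

```python
# Hedged sketch: query the Humanitarian Data Exchange for open datasets.
# Assumes the standard CKAN package_search endpoint exposed by HDX.
import requests

resp = requests.get(
    "https://data.humdata.org/api/3/action/package_search",
    params={"q": "displacement", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])
```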

With a few notable exceptions, most organizations struggle to wield the amount of data they actually have available (Johnson), which, to be honest, is ridiculously low compared to the staggering amount required to train deep learning algorithms. By the way, if you have an idea for a potential AI application and wonder how much data you might need, there are some formulas you can use (Mitsa); a couple of rough rules of thumb are sketched below. As a general rule, however, the more complex the model, the more data you need. The more accuracy you require, the more data you will have to provide to the system.
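For illustration, two of the most commonly cited rules of thumb can be written as one-liners. These are rough heuristics from the machine learning folklore, not the specific formulas in the reference above, and the numbers are purely indicative.

```python
# Two back-of-the-envelope heuristics for "how much data do I need?".
def samples_for_tabular(n_features: int, factor: int = 10) -> int:
    """Classic heuristic: roughly 10 labelled examples per input feature."""
    return n_features * factor

def samples_for_parameters(n_parameters: int, factor: int = 10) -> int:
    """For parameter-heavy models: roughly 10 examples per trainable parameter."""
    return n_parameters * factor

# A modest tabular model vs. a small deep network:
print(samples_for_tabular(n_features=40))             # ~400 labelled rows
print(samples_for_parameters(n_parameters=250_000))   # ~2.5 million examples
```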

Photo by Matthew Ball on Unsplash

You can, in some cases, train AI on small data, as long as it’s good enough and the scope quite narrow. We experimented with an AI chatbot that was supposed to be an assistive tool allowing self-help options for users. Unsurprisingly, we ground to a halt the moment we drew our red lines about registering individual users, harvesting their PII, and tracking live usage data to feed the beast. Like an overzealous student, the algorithm cannot learn if it doesn’t grab everything it can from the subject of study. And like an organism in a state of addiction, it needs more, more, and always more to sustain its own dependence.

In short, ‘Small data for AI’ might work, but it’s very likely to be the wrong kind of data from a Do No Digital Harm perspective (Dette). Again, this doesn’t mean that humanitarians should disengage from AI, quite the opposite. The more we engage and learn, the more we can understand potential and pitfalls, and possibly trace a path towards “principled AI”.

As Dr. Fast and I have already stated for distributed ledger technologies and blockchain (Coppi & Fast), humanitarian organizations and tech companies should refrain from deploying silver-bullet applications of untested or unproven platforms on people in situations of vulnerability. Emerging technologies can and should be informed and shaped by aid actors, but starting from backend functionalities. Any other path would likely result in unethical humanitarian experimentation (Sandvik, Jacobsen, & McDonald), unless the system is ideated to be “humanitarian by design”.

What I love seeing

Private, public, academia, and humanitarian sector coming closer and closer to each other

When I started, tech researchers looked at us humanitarians as the dirty peasants barely able to communicate. Most NGOs had agreements with universities mostly for internship management and master’s thesis supervision. There is now a strong global network of research centers churning out high-value peer-reviewed research on purely humanitarian topics, and PhDs supported or advised by NGOs are not unheard of anymore.

In the eyes of big companies, humanitarians were the beggars coming to ask for something, and we were usually shooed away with the promise of some free yearly software licences (or charity discount rates) and maybe some pro bono support hours. Right now, many big corporations have hired aid experts to liaise with the sector, and sometimes even co-design products with us.

Historically, the public sector wanted to get the best value for its aid money through rigid grant systems, and to protect it through solid audit power, while allowing for little or no overhead or research budget lines in the mix. We’re currently engaging with public donors on innovative procurement practices, innovation platforms, and broad public\private partnerships based on open market engagement.

Photo by Gayatri Malhotra on Unsplash

New actors, players, experts from previously overlooked groups leaping forward

When I started, there were few experts known beyond the inner circle of those engaged in humanitarian policy, and mostly on traditional topics of humanitarian action. Very few “technical” experts were actually known to the mainstream public.

Social media platforms and the emergence of a global discourse on the power of tech and digital for good have opened up a space that brought forward new actors and players from all walks of life. Even some classicists have recently pivoted to the digital dimension of aid and its ethics, after engaging more and more with demanding audiences on social media. This is still less true for non-Global North experts, and that’s a fight we need to keep up (see more in the “Hate” section).

I file this under a positive note because I feel optimistic about the current direction. A special shout-out to the International Review of the Red Cross, for example, which is now accepting submissions for a special edition on ‘Emerging Voices in International Humanitarian Law, Policy and Action’ as part of its commitment to create publication opportunities for “new, underrepresented and diverse voices”.

Intersectionality is becoming concrete (but still a long road ahead)

Absurd as it might sound, the humanitarian sector has been extremely resistant to accepting even the mere concept of intersectionality. It’s beyond me to guess the deep historical, cultural and sociological reasons behind this resistance, but I guess that the homogeneous socio-economic background of the aid workforce of the past century surely didn’t help.

Whatever the causes, as a result humanitarians have for a very long time considered some people to be “less equal than others” when it comes to the coverage offered by the humanitarian imperative. The complete erasure of non-binary people, the neglect of indigenous communities’ rights, and the blind eye turned to sexual violence against men are just a few examples.

Even today, many NGOs’ M&E manuals don’t offer any option other than Man\Woman when registering people in situations of vulnerability. Unsurprisingly, this directly reflects parameters from donors. Many, unfortunately, sport very progressive policies at home, but don’t seem to mind a discriminatory approach in their foreign policy.

What I love to see is finally a fierce battle from new generations of humanitarian experts to change all this. Once more, social media are playing a big role in this. Abuses are being called out — not always, but increasingly often — and it’s almost impossible nowadays for humanitarian management to just say “I wasn’t aware” without getting publicly dragged (as they should).

In this sense I am grateful to current digital transformation processes: they’re making it increasingly hard to hide deeply entrenched bigotry and discrimination, by making more visible and deliberate those choices that were previously filed under “that’s just how it is”. Today, organizations are showing their true colors and giving us a chance to fight for change.

What I hate seeing

Decolonization efforts being hijacked: Show us the money

Let’s face it: decolonization is mostly about assets and resource management. Behind the window dressing of high-level statements, I see the decolonization agenda unfolding as a pitiful fig leaf for traditional funders and donors to simply reduce funding to traditional humanitarian assistance.

Pitting NGOs from the Global North and Global South against each other by throwing a tiny bone in the middle is NOT a decolonization move. Decolonization means directing resources to local or ‘non-Global North’ actors, not just slashing funding to their Global North counterparts.

Ideally, the decolonization effort should involve an increase in funding to allow Global South organizations to grow and become more and more competitive, while continuing to support INGOs’ role as a subsidiary presence when local capacity is overwhelmed or political neutrality is strictly required.

This is especially true in humanitarian innovation and digital, where we all need to grow together. The system cannot work if only part of it is evolving, as is clear from the current set-up where local NGOs are often not given the lifeline to build digital skills and tools. The consequences are clear: systems cannot go end-to-end, data models cannot be harmonized and standardized, and data protection and cybersecurity concerns multiply, just to name a few.

Photo by Jason Leung on Unsplash

Digital literacy is a nice-to-have among aid workers

Humanitarians have long proclaimed their pride in being “pen and paper” people. This blends with a romantic, bourgeois, self-attributed view of aid workers as smart, overskilled people who decide to put values over career, people who escape the frivolous frills of a tech-obsessed society.

Hiding behind often vaguely racist arguments (“tech just doesn’t work here”; “nobody would understand it here”; etc.), many humanitarians still today revel in their self-absolved digital ignorance and protect it as hard as possible from this encroaching, evil tech world that wants to snatch away the purity of their profession.

Nobody asks you to love technology, dear humanitarian, but ignorance is not an option anymore. Do No Digital Harm doesn’t mean you play it safe by living in the ’60s. I’m sorry, but you’re now a liability for the humanitarian sphere: those who don’t understand tech or data have zero chance of protecting the data they collect with their teams.

Instead of preserving people’s data by refusing tech gadgets, you’re the weakest link in the protection system. Chances are that you have already, blissfully and somehow, spread the data of vulnerable communities that trusted you with their information.

This is bad news enough in general. But it’s when this worldview ends up in management roles that a lot of real problems start to unfold at scale. Digital is today part of everyone’s portfolio, even in the humanitarian industry. Not understanding the basics of data and tech means not being fit for purpose… or for management.

Take time to learn. Educate yourself and your teams. Ask those who can to explain to you how data systems work, where the risks lie, and what should be done to mitigate them. You can absolutely choose not to use technology, but make it the result of strategy, not prejudice.

Lack of spotlight on non-Global North-led humanitarian research

I know there is a ton of excellent material on humanitarian technology and innovation produced by researchers from the Global South. I also know that it is very hard to find through the main public platforms.

Some authoritative journals have substantially opened up to try and fill this gap, such as the already-mentioned International Review of the Red Cross (now open to submissions for a special edition on ‘Emerging Voices in International Humanitarian Law, Policy and Action’) or the Journal of International Humanitarian Action. Also, both waive submission costs, which are often a major obstacle for less-resourced researchers.

Still, the humanitarian knowledge-sharing bubble is mostly confined to Global North funded and hosted research centers and publications, which still dictate the publishing strategy and research agenda. Far from being only a humanitarian issue, this is a problem affecting the global field of research as a whole. The fact that Global South voices are hosted by Global North publications means that authors have to conform their ideas and expression to the language, editorial guidelines, formatting, and vetting of Global North-led processes.

Among other things, this is of paramount importance to ensure that Global South communities have a say in their own future, instead of being passive recipients of foreign technologies deployed by INGOs. The future of technology is being made based on decisions we take every day, and those decisions need to be informed by the most diverse sources of knowledge to have some hope of being inclusive and global, instead of just a projection of a certain demographic of early adopters with easy access to capital.

I want to close these ramblings with an appeal to those doing research on humanitarian data and tech in the Global South. If you need support, introductions, feedback, peer review, or a co-author under your lead, feel free to write to me. I will do what I can to help out, even if it’s just amplifying your research and your work.

Photo by Christa Dodoo on Unsplash

Written by Giulio Coppi

Global Tech & Digital Specialist at Norwegian Refugee Council, Humanitarian Innovation at IIHA Fordham and Fellow at IARAN. Building Human Rights inspired Tech
