
What’s Next on a Random Walk Down Facebook Lane

By Francis G. Coleman, Executive Vice President, CBIS

“Sometimes data behaves unethically…an algorithm that draws its lessons from the present reality can’t be counted on to improve the course of the future on its own.”

– Antonio Garcia-Martinez, a former Facebook employee

 

We have officially entered the twilight zone.

As we have been speeding along the information highway, enjoying our newfound access to information at breakneck speed and sometimes encountering speed bumps, no one ever anticipated that humanity would get in the way.

The beauty of algorithms, as I have been told, is that they don’t think for themselves. They make perfect decisions without all of the baggage that we humans have. No emotion…no conflicts of interest…no judgment…just pure rationality based on aggregated data analyzed over and over and over again…

At least that’s what we have been told. But we are now finding out that algorithms are, in fact, not perfect. And if they are left on their own, we run the risk of their being outsmarted by very smart people who understand their failings and are able to exploit them. Do a search for “algorithms and humanity” and you get a plethora of headlines:

“Your Life is an Algorithm, Your Brain is an Operating System…”

“An algorithm can predict human behavior better than humans…”

“The Algorithm that will Save Humanity… or Destroy It…”

“Humanity and human judgment versus data-predictive modeling…”

And the recent and not-so-recent public criticisms of Facebook, from serving as a gathering point for hatred to being used as a tool to disseminate information meant to undermine our democratic ideals and structures, are aimed at the same Facebook that has saved my relationships with many of my friends.

As a constantly traveling, “out of touch” individual who, in my limited contact with friends, was testing the limits of my friendships, I have been transformed into a fascinating traveler who is able to share parts of my life with my Facebook family, and who now feels connected and present in their lives.

Be careful what you ask for…

But the problems at Facebook and its cohorts are about much more than just an ethical lapse. They go deeper than that.

Facebook, the ultimate connector of humanity, also had some very practical concerns to think about. The challenge was how to move it from the dorm room to the boardroom. How to create such value that people or institutions would be willing to pay for it. How to monetize a great connector of humanity into a premier business?

That is the “genius” of Facebook and its ilk. With every post, every emoji, and every update, we are giving clear and specific signals about ourselves. We are sharing our shopping choices, our musical tastes, our fears and joys. And in the background sits this magical algorithm that gathers all this intelligence, processes it, and then “distributes” it to the hordes that are eager to sell things more efficiently. And Facebook is not alone in its use of this technology to make us more efficient targets for advertisers and marketers. There is a long line.

Its business model is built on the premise that algorithms are a cost-effective way to sell its product, which is information about you and me. This reduces its implementation costs tremendously and increases its profits exponentially. Remove the human factor (labor costs) and you become a half-a-billion-dollar company!

Facebook “never intended or anticipated this functionality being used this way–and that is on us.” – Sheryl Sandberg, Facebook COO, in response to the use of Facebook for anti-Semitic activity

And so what does all of this have to do with ESG?

Well, one can only wonder what would have happened if Facebook had put some of its activity through an ESG lens. They might have been able to anticipate possible risks to the business model.

There are three lessons to be learned from the experiences at Facebook.

First, Agnosticism Comes at a Price

We live in a values-laden society. To presume that those values can be discounted, ignored, and factored out of the algorithmic equation ignores the current reality of our society and world. It is still people who are using the platform, and people are imperfect.

One of the ESG lenses that might have been helpful to Facebook and its cohorts would have been to ask questions such as:

• “How might a bad actor exploit our platform for illegal or illegitimate purposes?”
• “Are there values/behaviors in the marketplace that we would not want our brand to be associated with?”
• “What is the likelihood that people holding those values can access our platform?” and “Can we, and do we want to, prevent them from doing so?”
• “Is there any scenario in which being agnostic to the values across our user base is a threat to our brand?”

You get where I’m going here?

It would be interesting to be a “fly on the wall” at a board meeting where these questions were asked. I suspect it would have led to spirited conversations and interactions. I also suspect it might have led to a different outcome than simply stating “we did not anticipate…”

It is not possible for an agnostic technology to rule the world. As long as the world is made up of people, it will continue to be a non-rational, messy place, not controlled or managed solely by an algorithm.

Humans are still needed.

Second, Human Sensitivities Trump Algorithms

It’s not clear to me whether it is ever possible or desirable to totally expunge human sensitivities from decision-making, especially for a company that operates across a spectrum of cultures and continents.

Looking at data through a human lens can sometimes add perspective to the impact of an action. Again, the algorithm, while exceedingly smart and fast, does not yet have the ability to look at the world through a variety of lenses that reflect the increasingly complex world that we live in.

It is absolutely clear to me that algorithms process data and make decisions FASTER than a human can, even an exceedingly smart human. But it is also clear to me that they do not necessarily make BETTER decisions than humans do.

Until judgment can be imbued into the algorithm, humans are far better positioned to know when certain decisions might cause ripples in the virtual world.

Third, Being the “Smartest Person in the Room” is Sometimes a Disadvantage.

Well, it’s not just being the smartest person in the room that is the problem; it is knowing it, acting like it, and discounting any warning signals that emerge to suggest an alternative plan of action. A little dose of humility with those smarts might be the right mixture to avoid being tone-deaf. Remember: HUMILITY, not HUBRIS!

************

Now, don’t get me wrong. Like I said, I credit Facebook with preserving my friendships around the world and making me a caring, attentive friend.

I would feel so much better if Facebook also understood that humans are integral to its brand, and that when humans are complemented by algorithms, they can make an excellent product even better. That would also help it manage the risk to its brand that seems to be dogging the company these days.

ESG is here to help.

Responsible algorithms anyone?

 

Article by Francis G. Coleman, Executive Vice President at CBIS (http://cbisonline.com), an investment advisory firm that provides investment services to the Catholic institutional market. Mr. Coleman is responsible for corporate strategy and planning, board member and trustee relations and development, and oversight of the Catholic Responsible Investing (CRI) and Information Technology departments at Christian Brothers Investment Services (CBIS). During his tenure, he has held the following positions: Director of Socially Responsible Investing (1994-1999), Director of Marketing and Participant Services (1989-1994), and Director of Participant Services and Operations (1987-1989). As Vice President and Director of SRI (1999-2002), he was responsible for incorporating ethical standards into investments and developing a policy and approach for CBIS that reflects the Church’s broad concerns in an effort to impact corporations. He serves on the boards of the CUIT Trustees; the IRRC Institute, a research center for social, environmental, and corporate governance issues; and Georgian Court University, a Catholic university in New Jersey sponsored by the Sisters of Mercy; and he is an advisor to the Park Foundation, which supports environmental and educational causes. He formerly served as Chair of the Board of Partners for the Common Good, a community investing program sponsored by CBIS, as well as Vice-Chair (1997-1998) and Chair (1998-2001) of the Board of ICCR. He served on the Investment Committee of the American Friends Service Committee (AFSC) from 2009 to 2015. He is a member of the Social Venture Network, is on the Board of the Montessori School of Syracuse, and served a nine-year term on the Board of the Leviticus Fund. Mr. Coleman holds a BA from Columbia University.

He is also a member of the SRI Committee for the SRI Fund, an alternative hedge fund serving primarily Catholic investors, and of the Independent Committee of the STOXX Christian Values Index, a screened faith-based index of European stocks. In addition, he serves on the Investment Committee of the Interfaith Center on Corporate Responsibility (ICCR).
