Human behavior that creates global warming is metaphysically immoral, and veganism is a moral solution.

In the previous post we established that not changing our behavior in response to global warming is immoral. In line with this, a report by two World Bank advisers found that the animal agriculture industry contributes, surprisingly, around fifty-one percent of all global emissions. From this we can conclude that consuming less meat would dramatically reduce our harmful impact on the planet. But why haven’t we heard of this before?

To answer this, the video above shows some of the statistics found in a documentary called Cowspiracy, and it explores why this might not be as well-known a cause as the direct burning of fossil fuels. The reasons offered are the reluctance of charities to confront the public about such a large change in behavior, and the power of the animal agriculture industry in stamping out dissent.

But in addition to morally valuing biological life on earth by not suffocating it with inorganic CO2, there is another benefit of not consuming animal meat. This benefit is the correct valuing of biologically more evolved animals over their less evolved counterparts – plants and grains. As Robert Pirsig writes in Lila:

An evolutionary morality ... would say [eating meat is] scientifically immoral for everyone because animals are at a higher level of evolution, that is, more Dynamic, than are grains and fruits and vegetables ... It would add, also, that this moral principle holds only where there is an abundance of grains and fruits and vegetables. It would be immoral for Hindus not to eat their cows in a time of famine, since they would then be killing human beings in favor of a lower organism.

Robert Pirsig

Thirdly, there is the growing list of health benefits to be found in reducing the amount of meat in your diet, improving the overall biological quality of the people on the planet.

These three key reasons make veganism moral on many levels, and it is supported by the evolutionary hierarchy of the Metaphysics of Quality.

There’s a great video on YouTube (above) called ‘The War on Science’ by ASAPScience which outlines an oft-misunderstood conflict. That conflict arises when:

“Science and society are often at odds”.

Putting the conflict in these terms clearly shows the wrong-headed thinking of those who are undercutting the intellectual values of science with the social values of society. Current social norms may be more convenient for society to defend and continue, but it is not intelligent to keep thinking the same thing when evidence shows otherwise.

In fact, rather than simply wrong-headed, such defence of social values in the face of contrary intellectual values is immoral and not supported by the MOQ.

The historical risk, though, is that without the Metaphysics of Quality the intellectual level can start to undercut the quality of society and defend biological values at the risk of social cohesion. This could well explain why many political conflicts throughout the world are simply between those who defend social values and those who support intellectual ones.

The MOQ, however, shows there is a more nuanced way to view social vs intellectual conflicts such as this. Within the structure of the MOQ is the ability to morally defend intellectual values while not risking social decay in the process. This is clearly shown in the MOQ’s ‘Codes of Morality’ and in the difference between ‘The Law’ and ‘Intellectual Morality’, the latter of which is not acknowledged by our current metaphysics.

I’ve seen lots of talk recently about the moral threat of AI. So, what does the MOQ have to say about it?

To start with, consider a fact which appears to be lost in much of the discussion.

No computer has ever made a moral judgement which it hasn’t been told to make, and so there is no reason to think this will ever change. Believing it will change spontaneously as a result of the improved intelligence of machines is just that, a leap of faith, and not supported by evidence. As it stands, it is the human programmer making all moral judgements of consequence. Computers, being 0s and 1s, are simply the inorganic tools of the culturally moral programmer.
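To make the point concrete, here is a minimal sketch (all names and rules hypothetical, invented for illustration) of a machine’s ‘moral’ decision: every verdict the program appears to reach is one its programmer explicitly encoded in advance.

```python
# A toy "ethical" filter. The machine appears to judge actions, but each
# verdict below was decided beforehand by the human programmer; the code
# merely replays that judgement.

# The programmer's moral judgements, encoded as a lookup table.
FORBIDDEN_ACTIONS = {"deceive user", "delete backups"}

def machine_judgement(action: str) -> str:
    """Return the machine's 'verdict' on an action.

    The machine contributes no morality of its own: it only applies
    the rule its programmer wrote.
    """
    if action in FORBIDDEN_ACTIONS:
        return "refuse"
    return "allow"

print(machine_judgement("deceive user"))  # refuse - the programmer's verdict
print(machine_judgement("send report"))   # allow
```

However sophisticated the rule set becomes, the structure is the same: the judgements of consequence live in the table and the conditionals, both authored by a human.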

Unfortunately, though, this isn’t likely to be appreciated any time soon because of a philosophical blind spot our culture has. That blind spot is our metaphysics, which neglects the fundamental nature of morality and in doing so gets confused about both where morality comes from and whether machines can make moral judgements independently of being instructed to do so.

For example, in a recent Foreign Affairs article, Nayef Al-Rodhan appears to believe that AI will start making moral judgements as a result of more ‘sophistication’ and learning and experience:

“Eventually, a more sophisticated robot capable of writing its own source code could start off by being amoral and develop its own moral compass through learning and experience.”

The MOQ, however, makes no such claim, which, as already mentioned, would be contrary to our experience. According to our experience it is only human beings and higher primates who can make social moral judgements in response to Dynamic Quality. Machines are simply inorganic tools, and their components only make ‘moral decisions’ at the inorganic level.

That’s not to say, though, that there aren’t any dangers of AI and that all risks are overblown. AI – loosely defined as advanced computational/mechanical decision-making not requiring frequent human input – threatens society if it is either poorly programmed and a catastrophic sequence of decisions occurs, or well programmed by a morally corrupt programmer. However, neither of these scenarios is fundamentally technological; both are philosophical, psychological and legal in nature.

The unique threat of AI is this aforementioned increase in the freedom of machines to make decisions without human intervention, making them both more powerful and more dangerous. The sooner our culture realises this, the sooner it can start to discuss these moral challenges and stop worrying about the machines ‘taking over’ in some kind of singularity apocalypse. Because unfortunately, if we don’t understand the problem, a solution will be wanting, and therein lies the real threat of AI.