#philosophy of science


philosophycorner:

How to Use the Feynman Technique to Identify Pseudoscience

By Simon Oxenham

In late 2015, a study made headlines worldwide by bluntly demonstrating the human capacity to be misled by “pseudo-profound bullshit” from the likes of Deepak Chopra, infamous for making profound-sounding yet entirely meaningless statements by abusing scientific language.

This is all well and good, but how are we supposed to know that we are being misled when we read a quote about quantum theory from someone like Chopra, if we don’t know the first thing about quantum mechanics?

Continue Reading



philosophycorner:

When Science Went Modern

By Lorraine Daston

The history of science is punctuated by not one, not two, but three modernities: the first, in the seventeenth century, known as “the Scientific Revolution”; the second, circa 1800, often referred to as “the second Scientific Revolution”; and the third, in the first quarter of the twentieth century, when relativity theory and quantum mechanics not only overturned the achievements of Galileo and Newton but also challenged our deepest intuitions about space, time, and causation.

Continue Reading



philosophycorner:

You thought quantum mechanics was weird: check out entangled time

By Elise Crull

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’ Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.
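
A minimal textbook illustration of that “inability to describe independently” (our gloss, not the article’s): the two-particle singlet state

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B\right)$$

cannot be factored into a product $|\phi\rangle_A \otimes |\chi\rangle_B$ of separate states for the two particles, so neither particle has a complete state description of its own.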

Continue Reading



philosophycorner:

This Is Why Understanding Space Is So Hard

By Dan Falk

If all the matter in the universe suddenly disappeared, would space still exist? Isaac Newton thought so. Space, he imagined, was something like Star Trek’s holodeck, a 3-dimensional virtual-reality grid onto which simulated people and places and things are projected. As Newton put it in the early pages of his Principia: “Absolute space, of its own nature, without reference to anything external, always remains homogeneous and immovable.”

This seems persuasive in everyday life. I’m walking east, you’re walking west, and the post office stays put: The frame of reference remains static. But Newton’s contemporary, the German mathematician and philosopher Gottfried Leibniz, balked at this idea of absolute space. Take away the various objects that make up the universe, he argued, and “space” no longer holds any meaning. Indeed, Leibniz’s case starts to look a lot stronger once you head out into space, where you can only note your distance from the sun and the various planets, objects that are all moving relative to one another. The only reasonable conclusion, Leibniz argued, is that space is “relational”: space simply is the set of ever-changing distances between you and those various objects (and their distances from one another), not an “absolute reality.”
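
As a toy gloss on the relational view (our sketch, not Falk’s; the objects and coordinates are made up), a Leibnizian “space” can be modelled as nothing over and above the table of pairwise distances between objects; delete the objects and nothing remains:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def relational_space(objects):
    """A Leibnizian 'space': nothing but the pairwise distances between objects."""
    return {(a, b): dist(pa, pb)
            for (a, pa), (b, pb) in combinations(objects.items(), 2)}

# With objects present there are spatial facts...
print(relational_space({"sun": (0, 0), "earth": (1, 0), "post office": (1, 1)}))
# ...take the objects away and "space" is simply empty.
print(relational_space({}))  # -> {}
```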

Continue Reading



Opinion: Why science needs philosophy

By Lucie Laplane et al.

Despite the tight historical links between science and philosophy, present-day scientists often perceive philosophy as completely different from, and even antagonistic to, science. We argue here that, to the contrary, philosophy can have an important and productive impact on science.

Continue Reading



discoursedrome:

transgenderer:

tumblr left was really into the term “lived experiences” for a while but bizarrely nobody seemed to acknowledge that the phrase implies a very weird view of anecdata vs. science

This attitude comes up a lot in academic crit-theory circles, and it’s popular among campus activist types, so you definitely see it there, but I think there’s kind of a game of telephone where a lot of the details got washed out by the time it made it to Tumblr.

I’m not too well versed myself but the basic argument is sort of a Foucauldian one about gatekeeping – the idea is that what constitutes “science” and “proof” is not simply an optimized truth-seeking procedure but a set of rules about who can legitimately seek truth and what that looks like, which is designed to concentrate the ability to declare something “truth” or “knowledge” in the hands of a social elite, as a means of preserving their social standing and control. In other words, the argument is that at least part of the reason we prefer “science” to “anecdotes” is because anecdotes are for poor people without access to academic infrastructure and the social codes and upbringing that could grant them that access.

There are better and worse versions of that position, I think. Like it’s not unusual to run into people who think science (or “Western” science) is purely colonial propaganda and the subjective personal experiences of Marginalized People are an inherently superior source of wisdom – mostly the sort of crystal-healing-TCM type of leftist who talks about goddesses and traditional cultures a lot – and maybe Tumblr skews that way a bit more because it’s full of teens and buzzwords and medievalpoc followers. But you definitely also run into people who recognize that science is a superior form of truth-seeking but want to disentangle it from the social control mechanisms that it does contain, people who recognize that a lot of what we call science isn’t actually necessary or even valuable to empiricism. And that’s useful because it leads to things like qualitative research, serious consultations with “traditionally studied” groups by the people who study them, and efforts to democratize science by improving access to data, education and publishing, recognizing immigrants’ foreign credentials, and so forth.

It’s hard to take a Foucauldian approach to this sort of thing because you need to do a lot of compartmentalization to prevent it from reflecting back on you – my usual objection is that very few people reach the should-be-obvious conclusion that their counterparadigm is, equally, designed to favour them as gatekeepers of truth, and that much of the struggle is a pure turf war in which the perception of truth is as valuable as the real thing. But you do find good takes in this universe of criticism, it’s just that – ironically – they require a nuance that tends to be lost outside of ritualized academic formats.

siberian-khatru-72:

possessivesuffix:

siberian-khatru-72:

max1461:

siberian-khatru-72:

max1461:

max1461:

I think it’s worth remembering that, for language families like IE and Semitic, the comparative method alone did not give us >5000 year old reconstructible proto-languages. The comparative method gave us 1500-2000 years, and we applied it to textual sources that were already >2000-3000 years old. Based on families with confident proto-language reconstructions that don’t have significant pre-modern written attestation, I think 3000 years is a better rule of thumb for the maximum time-depth at which the comparative method is really effective. Of course that’s just a rule of thumb—if someone can actually demonstrate an older relationship with systematic sound correspondences in core vocabulary and morphology then I’ll change my tune.

@kaumnyakte-ultra

True, but IIRC still only three or four thousand years. And the fact that huge chunks of the family are spoken on relatively isolated islands basically provides the ideal environment for the comparative method to succeed. We’re extremely spoiled by Austronesian, in comparison to like, the Amazon (which is what originally got me thinking about this), which is one of the least friendly environments possible for historical linguistics.

[Image: chronology of the Austronesian expansion, from Blust’s “The Austronesian languages”.]

So, out-of-Taiwan expansion was already underway by 4800 BP, and the breakup of Proto-Austronesian in Taiwan must have occurred even earlier.

Also, the idea that island environment impedes language contact is a myth; even in Polynesia contact was widespread. I doubt that there is even one Oceanic language that does not have loans from other Oceanic languages.

Fair enough, that’s quite a bit older than I thought it was.

With regard to island environments, I wasn’t talking about language contact between already-differentiated varieties as much as I was talking about the fact that forming large dialect continua is more difficult, so subgrouping is in some sense cleaner and things are more closely aligned to the neogrammarian model with linearly orderable sound changes. But maybe this is not really true either, I’m not sure.

Well, sound changes are always linearly orderable. It’s just that in dialect continua the order of changes may be different for different varieties, since the changes themselves spread by contact. In clear-cut subgroups the order would be identical for all languages; such subgroups result from bottleneck effect, usually during migrations - and there were plenty of migrations in the Amazon. 
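
To make the ordering point concrete, here’s a toy sketch (the proto-form and both changes are invented for illustration): the same two changes applied in different orders yield different reflexes, and that difference is precisely the signal that lets you order them.

```python
import re

# Two hypothetical sound changes:
#   palatalization: k > tʃ before i
#   raising:        e > i
palatalize = lambda word: re.sub(r"k(?=i)", "tʃ", word)
raise_e = lambda word: word.replace("e", "i")

proto = "keli"

# Variety A: raising applies first and feeds palatalization.
variety_a = palatalize(raise_e(proto))  # keli > kili > tʃili

# Variety B: palatalization applies first (vacuously here), then raising.
variety_b = raise_e(palatalize(proto))  # keli > keli > kili

print(proto, "->", variety_a)  # tʃili
print(proto, "->", variety_b)  # kili
```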

If you look at the most successful applications of the comparative method to modern languages - Austronesian, Bantu and Algonquian - you’ll find that there are few clear-cut subgroups in these families. Algonquian has an Eastern Algonquian subgroup, but hardly anything else; Bantu is divided into “zones” which are not subgroups, and Austronesian does have Malayo-Polynesian and Oceanic, but most really conservative languages outside of Taiwan belong to “Western Malayo-Polynesian”, which again is not a subgroup.

Of course, if you do have clear-cut subgroups, you can (and must) compare reconstructed intermediate protolanguages, which immediately adds one or two thousand years to your supposed time limit. Uralic reconstruction is based on comparing Proto-Finnic, Proto-Mansi, Proto-Samoyed, etc. Each of these low-level reconstructions is pretty solid, except perhaps Proto-Permic. I think that reconstructed Proto-Finnic is more useful for Uralic reconstruction than Gothic is for Indo-European.

Do I need to now start crossposting here discussions I just got done posting on Twitter…

What does “the comparative method being effective” mean exactly? Identifying a relationship at all? Identifying enough regular correspondences to sketch a reconstruction? Being actually certain that the reconstruction is broadly correct? The first clearly works at least up to 6000 years, with sufficient finesse probably more. The second clearly works at least up to 4000–5000 years.

The third is, yes, much more trouble. Even in IE we keep having debates over things like laryngeal theory and glottalic theory, large parts of them not depending on the correspondences per se but the phonetic typology of the assumed reconstructions and sound changes. Frankly I think this is actually fundamentally uncertain for any bottom-level proto-languages, no matter if 5000 or 500 years old: there are too many possibilities for isomorphic reconstructions. But add any solid outgroup evidence — a relationship that is known but not necessarily reconstructed — and a lot can be resolved. Sometimes loanword evidence might work as outgroup evidence too (very much the case for Finnic: e.g. Baltic loanword evidence will resolve that core *ht ~ South Estonian *tt is < *kt), but further back in history, any identifiable proto-node is ever more likely to not have been close enough to any other proto-node for this to work.

Intermediate proto-language uncertainty will still remain in figuring out what innovations are shared because they occurred before the split-up of Proto-Intermedic, and which are shared because they’re areal Common Intermedic, though that does at least amount to knowing that there was a given innovation in a given direction.

Yes, there is clearly a continuum as we go from more recent to deeper relationships, and the further we go, the less we can reconstruct.

But it is actually very misleading to frame this in terms of absolute time (“a cut-off point for the comparative method”). There are two reasons for this.

First, you can date the breakup of a proto-language only by comparison with an archaeological dating, and/or by glottochronology (if you accept this method) - and in both cases you need a decent reconstruction. That is, you cannot assign a date to a language family unless you’ve already applied the comparative method. So by definition you cannot know the depth of a protolanguage that you cannot reconstruct.
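
For reference, the classic Swadesh-style glottochronological formula (relevant only if you accept the method, as noted above) dates a split from the proportion $c$ of shared core-vocabulary cognates, given an assumed per-millennium retention rate $r$ (commonly $r \approx 0.805$, with both lineages decaying independently):

$$t = \frac{\ln c}{2 \ln r}$$

For example, $c = 0.3$ gives $t = \ln 0.3 / (2 \ln 0.805) \approx 2.8$ millennia.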

Second, and more important, our ability to make a sufficiently detailed reconstruction depends not only on absolute time, but also on many other factors: level of documentation of daughter languages, number of daughter languages, rates of lexical replacement, availability of more or less conservative languages, possibilities for internal reconstruction (the more non-trivial morphophonology, the better), structure of the family (the more intermediate nodes in the tree, the better), etc. And of course, it depends on how much time and effort was put into attempts to reconstruct a proto-language - the main advantage of Indo-European is not the availability of ancient languages, but the sheer number of linguists engaged in reconstruction.

The meaninglessness of various figures for “the cut-off point of the comparative method” was shown long ago by Manaster Ramer in his paper on “Uses and Abuses of Mathematics in Linguistics”, and it is rather sad to see the same old notion of the “time limit for the comparative method” repeated again and again.

Right, I very much don’t want to claim these are consistent points for how far comparison works; they’re examples of roughly how old some known / reconstructed relationships have turned out to be (i.e. clearly more than 3000 years). Sufficiently hard cases that are younger but unworkable might also exist. Though how would we know exactly? Very good point too that relationships are identified first and their age determined only afterwards.

Manaster Ramer has made other good points on this too. In the Twitter discussion alluded to above, de Carvalho recommended his 2000 paper with Baxter: “Beyond lumping and splitting: probabilistic issues in historical linguistics”.

Finally, the idea that our methods allow us to ‘prove’ language relationships to a certain limit, beyond which the responsible scientist must refrain from speculation, reflects a nineteenth-century inductivist ideology of science which is now rightly discredited. In the inductivist view, scientists carefully observe facts, their minds uncontaminated by preconceived notions or hypotheses; and they prove new scientific results by applying a fixed code of valid inductive principles to their observations. (A ‘method’ in the narrow sense, as in ‘the comparative method’, is a code of this kind.) As long as scientists unswervingly follow this procedure, it is believed, the truth of their results is assured, and the store of legitimately proven scientific knowledge is gradually increased. But speculations not firmly grounded in observation undermine the legitimacy of the whole process, and pollute the inquiry from that point on.
This view, though too rigid to follow in practice, and now largely abandoned by philosophers of science, still survives among the defence mechanisms of our field. By suggesting that hypotheses about deep linguistic relationships are forever beyond the reach of legitimate scientific inquiry, it is now doing a disservice by unnecessarily and prematurely discrediting some of the most interesting lines of inquiry open to us. We urgently need more discriminating defences which will protect us without exacting this high price.

Taming the multiverse: Stephen Hawking’s final theory about the big bang

Professor Stephen Hawking’s final theory on the origin of the universe, which he worked on in collaboration with Professor Thomas Hertog from KU Leuven, has been published today in the Journal of High Energy Physics. 

The theory, which was submitted for publication before Hawking’s death earlier this year, is based on string theory and predicts the universe is finite and far simpler than many current theories about the big bang say.

Professor Hertog, whose work has been supported by the European Research Council, first announced the new theory at a conference at the University of Cambridge in July of last year, organised on the occasion of Professor Hawking’s 75th birthday.

Modern theories of the big bang predict that our local universe came into existence with a brief burst of inflation – in other words, a tiny fraction of a second after the big bang itself, the universe expanded at an exponential rate. It is widely believed, however, that once inflation starts, there are regions where it never stops. It is thought that quantum effects can keep inflation going forever in some regions of the universe so that globally, inflation is eternal. The observable part of our universe would then be just a hospitable pocket universe, a region in which inflation has ended and stars and galaxies formed.
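
For context (standard inflationary cosmology, not a detail from the new paper): “expanded at an exponential rate” means the scale factor grew as

$$a(t) \propto e^{Ht},$$

with the Hubble parameter $H$ roughly constant during inflation, so the universe doubled in size every $\ln 2 / H$ of time, over and over, within that tiny fraction of a second.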

“The usual theory of eternal inflation predicts that globally our universe is like an infinite fractal, with a mosaic of different pocket universes, separated by an inflating ocean,” said Hawking in an interview last autumn. “The local laws of physics and chemistry can differ from one pocket universe to another, which together would form a multiverse. But I have never been a fan of the multiverse. If the scale of different universes in the multiverse is large or infinite, the theory can’t be tested.”

In their new paper, Hawking and Hertog say this account of eternal inflation as a theory of the big bang is wrong. “The problem with the usual account of eternal inflation is that it assumes an existing background universe that evolves according to Einstein’s theory of general relativity and treats the quantum effects as small fluctuations around this,” said Hertog. “However, the dynamics of eternal inflation wipes out the separation between classical and quantum physics. As a consequence, Einstein’s theory breaks down in eternal inflation.”

“We predict that our universe, on the largest scales, is reasonably smooth and globally finite. So it is not a fractal structure,” said Hawking.

The theory of eternal inflation that Hawking and Hertog put forward is based on string theory: a branch of theoretical physics that attempts to reconcile gravity and general relativity with quantum physics, in part by describing the fundamental constituents of the universe as tiny vibrating strings. Their approach uses the string theory concept of holography, which postulates that the universe is a large and complex hologram: physical reality in certain 3D spaces can be mathematically reduced to 2D projections on a surface.

Hawking and Hertog developed a variation of this concept of holography to project out the time dimension in eternal inflation. This enabled them to describe eternal inflation without having to rely on Einstein’s theory. In the new theory, eternal inflation is reduced to a timeless state defined on a spatial surface at the beginning of time.

“When we trace the evolution of our universe backwards in time, at some point we arrive at the threshold of eternal inflation, where our familiar notion of time ceases to have any meaning,” said Hertog.

Hawking’s earlier ‘no boundary theory’ predicted that if you go back in time to the beginning of the universe, the universe shrinks and closes off like a sphere, but this new theory represents a step away from the earlier work. “Now we’re saying that there is a boundary in our past,” said Hertog.

Hertog and Hawking used their new theory to derive more reliable predictions about the global structure of the universe. They predicted that the universe emerging from eternal inflation on the past boundary is finite and far simpler than the infinite fractal structure predicted by the old theory of eternal inflation.

Their results, if confirmed by further work, would have far-reaching implications for the multiverse paradigm. “We are not down to a single, unique universe, but our findings imply a significant reduction of the multiverse, to a much smaller range of possible universes,” said Hawking.

This makes the theory more predictive and testable.

Hertog now plans to study the implications of the new theory on smaller scales that are within reach of our space telescopes. He believes that primordial gravitational waves – ripples in spacetime – generated at the exit from eternal inflation constitute the most promising “smoking gun” to test the model. The expansion of our universe since the beginning means such gravitational waves would have very long wavelengths, outside the range of the current LIGO detectors. But they might be heard by the planned European space-based gravitational wave observatory, LISA, or seen in future experiments measuring the cosmic microwave background.
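
A rough back-of-the-envelope calculation (our numbers, not the paper’s) shows why: a gravitational wave of frequency $f$ has wavelength $\lambda = c/f$. LIGO listens around $10$–$10^3$ Hz, i.e. $\lambda \sim 300$–$30{,}000$ km, whereas a millihertz wave in LISA’s band has

$$\lambda = \frac{c}{f} \approx \frac{3 \times 10^8\ \mathrm{m/s}}{10^{-3}\ \mathrm{Hz}} = 3 \times 10^{11}\ \mathrm{m},$$

roughly twice the Earth–Sun distance, far beyond anything a ground-based detector can catch.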

Reference:
S.W. Hawking and Thomas Hertog. ‘A Smooth Exit from Eternal Inflation?’ Journal of High Energy Physics (2018). DOI: 10.1007/JHEP04(2018)147



Being natural

We often speak about things being ‘natural’ or ‘more natural’ than other things. But what does this claim amount to? What does it mean to be natural?

Natural kinds

The concept of natural kind may be of service to us.

Something is of a natural kind if it can be grouped according to the structure of the natural world. The natural world can be ‘carved at the joints’ (Plato). Good theories, then, cut nature along these joints. Physics, for example, has it that electrons belong to the natural kind of fundamental particles.

Not so fast

However, human beings have blurred the boundaries of naturalness; for our actions and nature are in a constant interplay. At what point should something be considered artificial or arbitrary instead of natural?

Given our inquisitive and creative ways, we have used technology to synthesise vitamin C and new chemical elements (e.g., einsteinium [Es]). We have created ideal conditions for viruses (e.g., SARS-CoV-2, the virus behind COVID-19) to spread globally—viruses which mutate via us. And while plants (e.g., GMO foods) can be grown from natural resources and share biological attributes with ‘wild’ flora, we have manipulated their DNA.

Are none of these examples natural in your eyes?

Philosophy to the rescue?

It’s tempting to think that only mind-independent things, entirely free of human involvement, are natural. But this approach eliminates a lot. Perhaps some metaphysics and philosophy of science can help us tighten up the definition.

For David Lewis there are ‘perfectly natural’ properties, like those described in physics and laws of nature; they are fundamental, simple, and intrinsic. But less-than-perfectly-natural things also exhibit degrees of naturalness; they are just more complex and abundant.

A more liberal conception of naturalness is available in the work of Quine, whose natural kinds merely share natural properties. The scope of these natural kinds, however, is liable to become enormous. For example, if liquid is a natural kind, we haven’t exactly carved nature at the joints; a huge number of items fit this description.

Of course, philosophy didn’t rescue us.

 ⁂

A helpful concept here is social construction.

Humans causally construct things (e.g., money) which are real but whose existences aren’t inevitable; for they are contingent on human decision-making. They are not natural but social kinds.

But we also constitutively construct things. These are things which necessarily stand in relations to human features and activity.

Take black and woman as two potential human kinds. Are they autonomously real, are they socially constructed, or are they both? While each is said to instantiate its own collection of biological properties, parts of their realities seem to depend on aspects of human culture, such as oppression and privilege, as well as causal factors, such as geography and gender norms.

(Pictured: What of nature survives the influence of ‘man’? [Francesco Paggiaro/Pexels])



Studying black holes … with water

Let’s continue the fun with analogies! Below we expand on the use of a particular analogy from science which we briefly discussed in a recent article.

The image above depicts the supermassive black hole M87*, which sits at the centre of the M87 galaxy some 55 million light years away. It was compiled from radiofrequency signals collected across several telescopes over two years. It is the first of its kind.

In the image we are given direct evidence of Einstein’s theory of general relativity. The black hole is dark, as predicted, since radiation cannot escape black holes once it’s within their boundaries. Moreover, the accretion disk (bright) around the black hole, from which radiation is emitted, is of a lopsided-doughnut shape. This varied brightness results from intense gravitational warping. And, because of rotation, there’s a kind of relativistic Doppler effect going on: radiation is boosted in the direction of rotation towards Earth.
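
To put a formula on that last point (standard relativistic beaming, not specific to the M87* analysis): material moving at speed $\beta = v/c$ at angle $\theta$ to the line of sight has Doppler factor

$$\delta = \frac{1}{\gamma\,(1 - \beta \cos\theta)}, \qquad \gamma = \frac{1}{\sqrt{1 - \beta^2}},$$

and its observed specific intensity is boosted roughly as $I \propto \delta^3$, so the side of the disk rotating towards Earth appears markedly brighter than the receding side.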

Now, here’s a funny thing of relevance to us: some scientists and philosophers claim we can study black holes by investigating … [drum roll] … plain old water. One argument goes like this.

In analogue experiments involving surface-water waves, something about black holes is realisable in the ‘white holes’ of surface-water waves. Therefore, black holes can be modelled by analogy because their models and the models of white holes are related by the assumptions they share.

The analogy is not defined by a material relation. Nonetheless, thermal aspects of Hawking radiation (named after Stephen Hawking), which is released at black-hole boundaries, can be simulated in water. The analogy owes itself to ‘syntactic isomorphism’ between models, whereby the relation is confirmed in a ‘Bayesian sense’.
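
For the curious, the ‘thermal aspect’ in question (formula ours, not the post’s) is that a black hole of mass $M$ is predicted to radiate like a blackbody at the Hawking temperature

$$T_H = \frac{\hbar c^3}{8 \pi G M k_B},$$

and it is the water-wave counterpart of this thermal spectrum that the analogue experiments aim to reproduce.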

‘Analogue simulation’ is still a powerful experimental tool which can be used in a similar sense to computer simulation. Isn’t this cool? Or are such analogies fraudulent in some way because they only offer crude and opaque approximations via models which are often proven incorrect?

(Picture credit: Event Horizon Telescope project.)



Condensed Matter - an Interview with Sam Kimpton-Nye

In our interview with Sam we discussed his new metaphysics and philosophy of science podcast, Condensed Matter; tips for getting into philosophy; the fictional character he identifies most with; and more.

Read our interview with a philosopher here!



Does Physics Rule the Sciences?

Here’s one reason why it may not:

Every biologist is, at heart, a chemist.

And every chemist is, at heart, a physicist.

And every physicist is, at heart, a mathematician.

And every mathematician is, at heart, a philosopher.

And every philosopher is, at heart, a biologist.

Read more here.

