This might be considered a bug, but the behavior of `NIntegrate` follows the `"GlobalAdaptive"` strategy. The integral, in fact, exposes a weakness in the strategy.

With the exact form with `(0 - x)^2`, `NIntegrate` takes advantage of the symmetry to multiply the integrand by `2` and integrate over `{0, Infinity}`.
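
For an even integrand, this symmetry reduction is equivalent to the following (a sketch, using the OP's `integrand`):

```
(* Integrate the even integrand over the half-line and double the result *)
2 NIntegrate[integrand, {x, 0, Infinity}]
```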

With the inexact form with `(0.0 - x)^2`, `NIntegrate` does not recognize the symmetry and integrates over `{-Infinity, Infinity}`. At first, the interval is split in half at `x == 0`. The integrand underflows to zero at all sampling points in each interval, yielding estimates of `0.` for the integral and error. The precision and accuracy goals cannot be met if the integral is zero and we have `AccuracyGoal -> Infinity`. In such a case, `NIntegrate` will try a couple of subdivisions, as long as the integral and error remain zero, before returning `0.` and giving a `NIntegrate::izero` warning. In the OP's case, `NIntegrate` first tries this on `{0, Infinity}`. After the first subdivision, the integral and error estimates turn out to be `1.70951*10^-109` (both, that is, a 100% relative error) instead of zero.

We now have a new situation:

- The total integral is positive, `1.70951*10^-109`.
- The error over the negative reals is `0.`.
- The relative error is 100%, so further subdivisions will be carried out on the interval with the greatest error.
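
The selection rule can be illustrated with a toy model (an assumption-laden sketch, not `NIntegrate`'s actual internals): store the regions as `{interval, error}` pairs and always bisect the one with the largest error estimate.

```
(* Toy model of global-adaptive selection: pick the region with the
   largest error estimate to subdivide next *)
pickNext[regions_] := First@MaximalBy[regions, Last];

(* The negative half-line, with error 0., can never win the comparison *)
pickNext[{
  {{-Infinity, 0}, 0.},
  {{0, 5}, 1.70951*10^-109},
  {{5, Infinity}, 1.70951*10^-109}}]
(* {{0, 5}, 1.70951*10^-109} *)
```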

In the `"GlobalAdaptive"` strategy, the further subdivisions will never subdivide the interval over the negative reals because it will always have the smallest error estimate. Therefore, `NIntegrate` will calculate the integral over `{0, Infinity}` accurately, as with the exact form, but it will accept `0.` for the integral over `{-Infinity, 0}`. So the value of the integral is exactly half of what it should be.

This analysis suggests other workarounds:

```
NIntegrate[integrand, {x, -Infinity, Infinity},
 Method -> "LocalAdaptive"]
NIntegrate[integrand, {x, -Infinity, Infinity},
 MinRecursion -> 4]
(* -410.318 *)
```

Note that if `AccuracyGoal -> ag` is set so that `ag < Accuracy[0.]`, `NIntegrate` will return `0.` for the integral. Note that `Accuracy[0.] == -Log10@$MinMachineNumber + $MachinePrecision`, which is about `323.6`.
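
The threshold can be checked directly; the final `NIntegrate` call below is a sketch of the expected behavior with the OP's `integrand`:

```
(* The accuracy of the machine zero *)
Accuracy[0.]                                     (* 323.607 *)
-Log10[$MinMachineNumber] + $MachinePrecision    (* 323.607 *)

(* With an accuracy goal below this threshold, the zero estimate is accepted *)
NIntegrate[integrand, {x, -Infinity, Infinity}, AccuracyGoal -> 300]
```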

I said this might be considered a bug, although the behavior fits exactly with the global adaptive strategy. One could imagine that `NIntegrate` might be improved by subdividing any interval over which the integral is zero, or by giving a warning whenever it encounters such an interval. Even if it had subdivided such an interval once, the error estimate would have been extremely small (`10^-109`) compared to the rest of the integral. Probably the intervals would not have been subdivided further, and the integral would still be only `-205`. The question then is: when should `NIntegrate` give up subdividing? It's not so easy to fix in a way that is efficient.

This analysis was carried out with the help of the [`IntegrationMonitor`](https://mathematica.stackexchange.com/questions/26401/determining-which-rule-nintegrate-selects-automatically/96663#96663) option. For instance:

```
NIntegrate[integrand, {x, -Infinity, Infinity},
 IntegrationMonitor :> ((Print@
      Thread[Through[#["OriginalBoundaries"]] ->
        Through[#["Error"]]]) &)]
```

*Original guesses*

The main difficulty is that the effective support of $f$, where $|f(x)| \gg 0$, is so small compared to the scale `NIntegrate` expects.

Here are some hypotheses that I won't have time to fully explore:

It could be a numerics thing, since `0.0` might demote the function value to `MachinePrecision`, whereas `0` would not. When `NIntegrate` has to make small subdivisions, it increases the precision of the variable. The presence of other machine-precision constants should have a similar, but perhaps not exactly the same, effect. This casts doubt on this hypothesis, but it's the only hypothesis I have at this point that accounts for the difference in the value of the integral.

Or, given that the integrals are off by a factor of almost exactly 2, it could be a preprocessing thing, probably `"EvenOddSubdivision"`, since the integrand is an even function. The difference between how the integrands are processed might still be explained by a numerics issue.
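
One way to probe this hypothesis is to turn off symbolic preprocessing, which includes `"EvenOddSubdivision"` (a sketch; `"SymbolicProcessing" -> 0` sets the time allowed for symbolic preprocessing to zero):

```
(* With symbolic preprocessing disabled, the exact form should no longer
   benefit from the even-symmetry reduction; comparing the two forms under
   this setting helps isolate whether the preprocessor is responsible *)
NIntegrate[integrand, {x, -Infinity, Infinity},
 Method -> {"GlobalAdaptive", "SymbolicProcessing" -> 0}]
```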

Some fixes, each of which focuses on the effective support of the integrand:

```
NIntegrate[f[x], {x, -Infinity, -1/10, 1/10, Infinity}]
NIntegrate[f[x],
 {x, -Infinity, 0, Infinity},
 Method -> {"PrincipalValue",
   "SingularPointIntegrationRadius" -> 10^-1}]
```

The correct value seems to be `-410.318`.

Another fix that "corrects" the numerics issue, whatever it may be:

```
NIntegrate[SetPrecision[f[x], 17],
 {x, -Infinity, Infinity}]
```

*Update:*
After looking at Bob's answer, setting precision essentially does the same thing: that is, both cause `0.0` to become `0`. As a "fix," it may not seem like much, since it's easier to write `0` to begin with. But if the integrands are not written by hand but generated programmatically, then rationalizing or raising the precision is probably the easiest way to go. It doesn't take care of a very small interval of support, though.
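
For programmatically generated integrands, a minimal sketch of the rationalizing approach (assuming the integrand is held in `f[x]`, as above):

```
(* Rationalize[expr, 0] converts approximate numbers such as 0.0 into
   exact ones, so the symmetry is visible to NIntegrate again *)
NIntegrate[Rationalize[f[x], 0], {x, -Infinity, Infinity}]
```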