Abstract
In this paper we investigate whether sentences presented as results of applying statistical models and artificial intelligence to large volumes of data (the so-called ‘Big Data’) can be characterized as semantically true, as quasi-true, or only as probably quasi-false and, in a certain sense, post-true. That is, we ask whether, in the context of Big Data, the representation of a data domain can be configured as a total structure, as a partial structure equipped with a set of sentences assumed to be true, or whether it cannot be configured as such a partial structure at all.