1 Introduction

In current discussions on knowledge transfer and distribution, the focus is mainly on the transfer and travel of finished knowledge products, such as models, templates (e.g. Humphreys, 2019; Knuuttila & Loettgers, 2020), and facts (e.g. Howlett & Morgan, 2011). Apart from data (e.g. Leonelli & Tempini, 2020), the transfer of “raw” knowledge products has not received much attention in the literature. This article discusses the selection of a specific kind of these raw materials, namely the mathematical objects used for model design. These mathematical objects are, for example, the simplest mathematical functions (of a variable, say x), such as a second-degree power function (\(x^2\)) or a simple trigonometric function such as cos(x).Footnote 1 This kind of knowledge transfer plays an important role in modeling, especially when a satisfactory template is not (yet) available, which is usually the case when a phenomenon is modelled for the first time, or when a template itself is being designed.

The starting point for discussing the selection of mathematical objects is the view that models are instruments of investigation (Morgan & Morrison, 1999), that modelmaking is the integration of several “ingredients” in such a way that the resulting model meets certain a priori criteria of quality (Boumans, 1999), and that the process of model building can be epistemologically compared to the process of instrument making (Boumans, 2005). The ingredients mentioned by Boumans (1999) are metaphors, analogies, mathematical concepts and techniques, stylized facts, data, and policy views. As the latter account focusses on the integration process, it does not discuss the considerations involved in the selection of the ingredients. However, when designing a new instrument, the choice of the materials from which the instrument will be made is a critical aspect of its design. This article shows that for designing a mathematical model, the selection process of the appropriate mathematical ingredients is equally critical.

A consequence of this view of modeling is that not every individual mathematical ingredient used to build a model has to be representational, in the sense that it itself represents a part or aspect of the target system. In the first instance, the ingredients are selected to ensure that the model meets predetermined quality criteria. Even if the purpose of the model itself is representation, this does not mean that all ingredients of the model, including the mathematical forms, must be representational. Take a thermometer for example: its purpose is to measure temperature, and therefore some of its components should be sensitive to temperature and thus represent temperature. But for a thermometer to function properly, the glass container holding the temperature-sensitive liquid must be insensitive, or nearly insensitive, to temperature.

To give a broader context to the idea that not every mathematical object in a model needs to be representational for the model to fulfill its function, I first discuss Hertz’s (1956) criterion of “appropriateness.” To make a model appropriate, Hertz argued, we cannot avoid using “empirically empty” components. A similar argument was made by Cartwright (1983) in her simulacrum account of models, where these components were referred to as “elements of fiction.”

To gain a better understanding of how mathematical forms are selected to design a model, I explore a textbook on material selection in mechanical design. Then I will discuss the first modeling efforts in economics, namely the attempts in the 1930s to model the business cycle mechanism. This period is deliberately chosen because no templates were yet available with which a business cycle model could be designed.

2 Appropriateness

The process of model-making in economics is often labelled “formalization.” In her account of how economists make models, Morgan (2012, pp. 19–20) makes a useful distinction between two meanings of formalization. If we consider its active form, “formalize” means to give form to, to shape, or to provide an outline of something. The second meaning becomes clear if we take its passive form, “formal.” Formal implies something rule bound, following prescribed forms. According to Morgan, modelmaking involves both meanings: “models give form to, in the sense of providing a more explicit or exact representation of our ideas about the world, and in creating those forms we make them subject to rules of conduct or manipulation” (p. 20).

These rules of conduct or manipulation, which are the rules for reasoning with a model, arise, according to Morgan (2012, p. 26, italics added), from two distinct aspects of the model. First, these rules must conform to “the kind of stuff that the model is made from, or language it is written in, or the format it has,” or in other words “they are given and fixed by the substance of the model.” Second, these rules are also determined and constrained by the subject matter represented in the model. This article focusses on the first aspect, namely the constraining features of the model’s substance on the kind of reasoning one can do with the model. This implies that when selecting the mathematical ingredients one must also take into account the kind of reasoning one wishes to perform with the model.Footnote 2 This aspect of model making has received less attention than the representational aims in the selection of mathematical forms.

The selection of the right mathematical ingredients that enable the preferred way of working with a model was discussed in one of the first accounts of models. Hertz (1956) formulated three criteria for the evaluation of a model: logical permissibility, correctness, and appropriateness.Footnote 3 Hertz considered correctness the “fundamental requirement”: models are incorrect “if their essential relations contradict the relations of external things” (p. 2). In modern terminology, the model should accurately map onto the target system. Hertz thought about this requirement in terms of the model’s predictive performance, but one could also state it more generally, in the sense that a model must be empirically validated. It should, however, be emphasized that the requirement of correctness only applies to the model as a whole and not to the individual equations or terms of the model.

The second criterion, logical permissibility, is an analytic criterion: a model is not permissible if it “contradicts the laws of thought” (p. 2). In other words, the mathematics or logic used to formulate the model should not contain any contradiction. This refers to the rule-bound aspect of any formalization mentioned above. According to Hertz, we can decide “without ambiguity” whether a model meets these two criteria. But he was not so optimistic about the requirement of appropriateness.

What he meant by the criterion of appropriateness is not so clear. He gave its meaning by first separating it into two sub-criteria, distinctness and simplicity: of two models of an object, one is more distinct if it “pictures more essential relations of the object” (p. 2). And of two models of equal distinctness, the more appropriate is the simpler one, which contains “the smaller number of superfluous or empty relations” (p. 2). Hertz did not elaborate on what he meant by empty or superfluous relations, but he explicitly noted that “empty relations cannot be altogether avoided: they enter into the images because they are simply images,—images produced by our mind and necessarily affected by the characteristics of its mode of portrayal” (p. 2). According to Lützen (2005, p. 92) the issue of simplicity is related to the avoidance of “conceptual and mathematical complication,” and involves “such properties as intuitive clarity, elegance, and beauty” (p. 93). In other words, these empty relations were necessary to facilitate analysis and tractability.

In short, Hertz emphasized that in addition to the more obvious criteria of empirical validity and mathematical correctness, a mathematical model must also be appropriate. This last criterion implies that one cannot avoid using mathematical concepts that enable the model to be a Bild of a phenomenon but that are “empty,” that is, they have no direct representational relation with the target system. This aspect of model building was also emphasized by Cartwright (1983) in her simulacrum account of explanation: “Some properties ascribed to objects in the model will be genuine properties of the objects modelled, but others will be merely properties of convenience” (Cartwright, 1983, pp. 153–154). These properties of convenience have a “powerful organizing role”, for example they “bring the objects modelled into the range of the mathematical theory”. Some of these properties will be real, some are idealizations, but some properties are “not even approached in reality. They are pure fictions.”

The role of and the need for these properties of convenience can be seen more clearly in a model that is not a mathematical model but a physical 3-D model made of Perspex, water, springs, wire, etc.: the Newlyn-Phillips machine, a hydraulic machine representing a Keynesian economy, in which the circulating water represents money (Phillips, 1950). One of the most relevant characteristics of 3-D physical objects is that they are subject to gravity. This hydraulic machine worked because of this force, but it also needed an electric motor to pump the water up. Neither gravity nor the electric motor had an economic equivalent, and so the motor was hidden (gravity being already invisible). In addition to the motor, the machine consisted of many other parts, hidden or not, that had no economic equivalents but were critical to the working of the machine. Such a model is not expected to represent the entire world, nor should every part of the model be representational. There are always things that are likely to be untranslatable or just plain wrong. But these elements do not necessarily cause difficulties in the functioning of the model. Rather, they are installed to enable its functioning.

This physical model also makes us more aware of the material aspect of model building. Morgan and Boumans’s (2004) study of the modelbuilding process of this 3-D hydraulic machine showed that modelbuilding involves dealing with both a great many constraints imposed from the physical side and a whole lot of commitments about how the economics is physically represented.

But these are not separate steps: each modelling decision involves both a physical constraint and an economic commitment at the same time. To make commitments about the analogue to the economy at the same time as working within the physical constraints requires tremendous creativity (Morgan & Boumans, 2004, p. 384)

Working with mathematics means taking the same kind of constraints into account. Just as one has to choose a material that is both strong and transparent to carry the colored water and keep it visible, the different kinds of mathematical objects have to be chosen for the model to fulfill its purpose. This constraining aspect is typical of materiality. The substance aspect of materiality constrains the kind of things one can do with a material. Wood does not conduct electricity, but iron does. According to Fleischhacker (1992), this is because substance has structure, and because mathematical objects also have structure, he characterizes mathematical objects as “quasi-substantial.” This structural aspect of mathematical objects restricts the range of molding.

This structural aspect of mathematical objects means that in order for models to be appropriate, one must think about the kind of mathematics that will allow the kind of reasoning one is aiming for. Since every mathematical object has its own structure, with its own structural properties, one needs to know these properties before deciding which of them can be useful for the model in question.

3 Materials selection in mechanical design

To draw on materials selection for a better understanding of how mathematical forms are selected, this section discusses a widely used textbook account of materials selection in mechanical design. Ashby’s textbook, Materials Selection in Mechanical Design, presents a systematic procedure for selecting the materials and processes that best match the requirements of a design (1999, p. xi). Central to this procedure is the interaction between function, material, shape and process. The book’s content is structured around this interaction, see Fig. 1.

Fig. 1

Interaction between function, material, process and shape (Source Ashby, 1999, p. 2, Fig. 1.1)

This interaction was sketched as follows: The selection of a material and the process of making the subject cannot be separated from the choice of shape. The term shape refers to both the external shape (the macro-shape) and the internal shape (the micro-shape) of the material. To achieve the shape, the material is subjected to processes that include primary forming processes (such as casting and forging), material removal processes (machining, drilling), finishing processes (such as polishing) and joining processes (such as welding). Function dictates the choice of both material and shape. The process is influenced by the material: by its formability, machinability, weldability, heat-treatability and so on. Process interacts with shape: the process determines shape, size, precision and cost. The interactions are two-way: specification of shape restricts the choice of material and process; but equally the specification of process limits the materials you can use and the shapes they can take (see Ashby, 1999, p. 13).

The first part of Ashby (1999), and most relevant for this article, discusses material. It is common practice in engineering to classify materials into six broad classes: metals, polymers, elastomers, ceramics, glasses and composites. The members of a material class have features in common: similar properties, similar processing routes and often similar applications. Each material can be thought of as having a set of properties, such as density, modulus, strength, toughness, and thermal conduction. But it is not a material, per se, that the designer is looking for; it is a specific combination of these properties, a specific property-profile. The material name can then be seen as the identifier for a particular property-profile (p. 22).

Because “material properties limit performance,” there is a need for a survey of properties “to get a feel for the values design-limiting properties can have” (p. 32). To simplify the survey of potential candidate materials, Ashby (1999) presents a series of property charts (which form the core of this textbook).Footnote 4 Each chart plots one property against another, mapping the fields in property-space occupied by each material class.

An example of such a chart is Fig. 2, where Young’s modulus, E, is plotted against density, ρ, on logarithmic scales.Footnote 5 The range of the axes is chosen to include all materials, from the lightest flimsiest foams to the stiffest, heaviest metals. It then turns out that data for a given class of materials (for example polymers) cluster together on the chart. Data for one class can be enclosed in a property envelope, as the figure shows. The envelope encloses all members of the class.

Fig. 2

An example of a material property chart: Young’s modulus, E, is plotted against the density, ρ, on log scales. Each material class occupies a characteristic part of the chart. (Source Ashby, 1999, p. 34, Fig. 4.2)

These material property charts usefully display the properties of materials. The charts summarize the information in a compact way. They show the range of any given property and identify the material class associated with segments of that range. According to Ashby (p. 63) the “most striking feature” of these charts is the way members of a material class cluster together.

The selection process then works as follows: A material has properties, such as density and strength. A design requires a certain profile of these, for example a low density and high strength. The problem is identifying the desired property-profile and then comparing it with those of real engineering materials to find the best match. This is done by first screening and ranking the candidates to give a shortlist, then seeking detailed supporting information for each shortlisted candidate, enabling a final choice.

The immensely wide choice is first narrowed by applying property limits that screen out the materials that cannot meet the design requirements. Further narrowing is then achieved by ranking the candidates by their ability to maximize performance. Performance is generally not limited by a single property, but by a combination of them. For example, the best materials for a light stiff tie-rod are those with the largest value of the specific stiffness, E/ρ, where E is Young’s modulus and ρ density, see Fig. 2.
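The index E/ρ can be derived in a few lines (a standard argument of the kind Ashby gives, sketched here rather than quoted): a tie of given length L must provide a prescribed tensile stiffness S while its mass m is as small as possible. Since S = EA/L fixes the cross-section A once the material is chosen,

$$m=\rho AL=\rho \frac{SL}{E}L=S{L}^{2}\frac{\rho }{E},$$

so, for given S and L, the mass is minimized by the material with the largest value of E/ρ.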

Combinations like these are called material indices: they are groupings of material properties that, when maximized, maximize some aspect of performance. There are many such indices. They are derived from the design requirements for a component through an analysis of function, objectives and constraints. The materials charts, such as Fig. 2, are designed for use with these criteria. Property limits and material indices are plotted on them, isolating the subset of materials that are the best choice for the design. Property limits isolate candidates which are able to do the job, material indices identify those among them which can do the job well.
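As a toy illustration of this two-stage procedure (screening by property limits, then ranking by a material index), the following sketch compares a handful of candidate materials for a light, stiff tie-rod. The candidate list, the property values (rough, order-of-magnitude textbook figures), and the screening limit are illustrative assumptions of mine, not data from Ashby (1999).

```python
# A toy sketch of screening and ranking for a light, stiff tie-rod.
# Property values are rough, order-of-magnitude figures used only for
# illustration; they are not taken from Ashby (1999).

candidates = {
    # name: (Young's modulus E in GPa, density rho in Mg/m^3)
    "structural steel":    (210.0, 7.85),
    "aluminium alloy":     (70.0,  2.70),
    "CFRP laminate":       (120.0, 1.60),
    "softwood (on grain)": (10.0,  0.60),
    "polymer (nylon)":     (3.0,   1.15),
}

# Stage 1: screening by a property limit (an illustrative minimum stiffness).
E_min = 50.0  # GPa
screened = {name: props for name, props in candidates.items() if props[0] >= E_min}

# Stage 2: ranking the survivors by the material index E/rho
# (specific stiffness), which is to be maximized for a light, stiff tie.
ranked = sorted(screened.items(), key=lambda item: item[1][0] / item[1][1], reverse=True)

for name, (E, rho) in ranked:
    print(f"{name:22s}  E/rho = {E / rho:5.1f} GPa/(Mg/m^3)")
```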

Supporting information differs significantly from the property data used for screening. Typically it is descriptive, graphical or pictorial: case studies of previous uses of the material, details of its corrosion behaviour in particular environments, information of availability and pricing, experience of its environmental impact. The final choice between competing candidates will often depend on local conditions: on the existing in-house expertise or equipment, on the availability of local suppliers, and so on. A systematic procedure cannot help here; the decision should instead be based on local knowledge.

4 Modeling the business cycle in the 1930s

To investigate how the selection of mathematical objects takes place in model design, it is best to study a period when no appropriate templates were available yet, or in other words, when a first template had yet to be constructed. The reason to explore such a pre-template period is to show that there is a distinction between the selection of templates and the selection of mathematical materials. This distinction will be clarified after the exploration of this period of model design in economics.

In economics, this pre-template period is the 1930s, when modeling was still a new practice. It is the period when mathematically minded economists attempted to model the business-cycle mechanism. The mathematical theories available at that time, equilibrium theory and utility theory, were not considered suitable for modeling the business cycle: equilibrium theory was a non-dynamic theory, and utility theory concerned individual behavior, whereas the business cycle is a macro phenomenon. These economists were in search of a macro-dynamics; that is, they wanted to model the business cycle mechanism and therefore were in search of the kind of formalism that could best represent this kind of dynamic behavior.

The first economist to write about the kind of mathematics needed for modeling macro-dynamic behavior was Frisch. In 1929, Frisch published an article on the meaning of static and dynamic.Footnote 6 In this article he first defined the “aim” of a “dynamic law” as “to describe how a situation changes from one point in time to the next” (Frisch, 1992, p. 391). These laws describe a succession of situations in time. In discussing the distinction between statics and dynamics, Frisch emphasized that this distinction “refers to the analytical method, not to the nature of the phenomena. We may thus speak of static or dynamic analysis, but not of a static or dynamic phenomenon. Phenomena as such are neither static nor dynamic” (p. 392).

In his explanation of the kind of mathematical forms needed for dynamic analysis, Frisch (1992) focused on the essential role of the rate of change with respect to time. For the more complicated dynamic problems, one needs not only rates of change of the first order (such as velocity), but also rates of change of the second order (such as acceleration). The two mathematical forms representing these two rates of change were, in Newton’s dot notation for derivatives: \(\dot{x}(t)\) and \(\ddot{x}(t)\). And then, based on these two notions, he defined a “dynamic law” as follows: “it involves the notion of rate of change or the notion of speed of reaction (in terms of time)” (p. 394). In the accompanying footnote he added that “a variable and its rate of change (in terms of time) must occur in one and the same argument” (p. 394, fn. 7).

Another founder of mathematical modeling in economics, Tinbergen, developed Frisch’s definition of dynamic analysis into a methodology for mathematical modeling.Footnote 7 In his inaugural lecture on appointment as professor at the Rotterdam School of Economics, Tinbergen presented a survey of the business cycle research that had already taken place and a kind of program of what still had to be done. According to him, the central question of business cycle theory is: “is it possible for an economic community to show a swinging movement without the external non-economic factors on which it is based showing such a movement” (Tinbergen, 1933a, p. 8).Footnote 8 The answer is yes, provided the relation between economic variables is dynamic. In addition to the rate of change, Tinbergen mentioned two other elements that make an equation dynamic. An equation is dynamic if it contains “a lag between two variables,” “a variable that has the character of a velocity of another variable,” or “a variable that has the character of a cumulation of another variable” (p. 5). The mathematical forms that represent these elements are: \(x(t-\theta )\), \(\dot{x}(t)\), and \({\sum }_{t}x(t)\) or \(\int x\left(t\right)dt\), respectively.

In his earliest work on the business cycle (e.g. Tinbergen, 1933b), Tinbergen assumed that the main cause of the business cycle was the long period required for production, with the consequence that supply lagged behind the market conditions. Therefore, he considered the lag as the most important element in a dynamic equation to model the business cycle mechanism. That is why he had studied all kinds of equations in which a relationship with a lag term was the starting point,

$$ax\left(t\right)+bx\left(t-\theta \right)=0,$$
(1)

and to which he added all kinds of other terms, such as a first-order differential, \(\dot{x}(t)\), or an integral, \(\int x\left(\tau \right)d\tau\). The addition of each term to the lag relation (1) was legitimized by giving it a specific economic interpretation, while the time lag, θ, represented production time. For example, the first derivative was added to the relation to represent speculation. But none of these equations led to the kind of cyclical behavior that could represent the business cycle as it was perceived by Tinbergen: a persistent cycle with a period of about eight years. Each of them implied either an unrealistic production time or a periodicity that was too short or too long. Only the combination

$$\dot{x}\left(t\right)=-ax\left(t-\theta \right)$$
(2)

led to satisfactory results. He arrived at this equation when analyzing the shipbuilding market (Tinbergen, 1931). With a production time of two years, θ = 2, and a realistic value of the coefficient a such that \(a\theta =1.57\approx \pi /2\), the resulting cycle has a period equal to eight years.
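The arithmetic behind these numbers can be reconstructed as follows (a standard calculation for Eq. (2), not a quotation from Tinbergen): an undamped solution \(x(t)=\sin (\omega t)\) requires

$$\omega \cos \left(\omega t\right)=-a\,\sin \left(\omega \left(t-\theta \right)\right)\quad \text{for all }t,$$

which holds only if \(\omega \theta =\pi /2\) and \(\omega =a\). Hence \(a\theta =\pi /2\approx 1.57\), and the period is \(2\pi /\omega =4\theta\), that is, eight years for a production time of \(\theta =2\) years.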

This early approach of Tinbergen’s, in which each mathematical term must have an economic meaning, that is, each mathematical object had to represent a part or an aspect of the target system (e.g. θ representing production time), was not successful. It led to only one equation that could represent the dynamics of the shipbuilding market. But the business cycle could not be assumed to have the same specific mechanism represented by Eq. (2). These early attempts forced Tinbergen to change his modeling methodology. Rather than requiring each term, such as the lag term, \(x(t-\theta )\), or the first-order differential, \(\dot{x}(t)\), to have an economic meaning, the new methodology required that one first look for the right mathematical forms and only then for the economic meaning of these forms.

This new methodology was first introduced by Tinbergen at the 1933 Leiden meeting of the Econometric Society (Marschak, 1934, pp. 187–188), in his presentation titled “Is the theory of harmonic oscillations useful in the study of business cycles?” The new approach was the result of the dissatisfying outcome of taking a production lag as the starting point. This dissatisfaction was also expressed in his inaugural lecture: the disadvantage of postulating lags is that they must be given in advance and have a fixed length; “this has been repeatedly felt as a too rigid representation of reality” (Tinbergen, 1933a, p. 13). The business cycle should not be explained simply by a predetermined time lag, but by the interaction of various other possible influences. For the analysis of such interaction, the calculus, which he called the theory of harmonic oscillations, was proposed as most useful: “can quantities with an integral character and a differential character, respectively, be found and do these quantities play an important role in the business cycle?” (p. 15).

Because of his background in physics, he knew that it is also possible to generate a cycle when both a velocity term and an integral term appear in an equation; differentiating such an equation yields a second-order differential equation:

$$a\dot{x}\left(t\right)+bx\left(t\right)+c{\int }_{0}^{t}x\left(\tau \right)d\tau =0\Rightarrow a\ddot{x}\left(t\right)+b\dot{x}\left(t\right)+cx\left(t\right)=0$$
(3)

Therefore, Tinbergen proposed to use “a more indirect way” by starting from the mathematical nature of harmonic oscillations and only then searching among the main economic relations for those that are likely to fit into a second-order differential equation with suitable coefficients. Accordingly, he classified economic relations into two groups: (1) “differential phenomena,” i.e., functions of the rate of change \(\dot{x}(t)\), and (2) “integral phenomena,” functions of \(\int xdt\). Only then did he enumerate all kinds of economic interpretations for both groups, such as speculation as a possible interpretation for the differential.
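For reference, the standard result from the theory of harmonic oscillations that Tinbergen appealed to (a textbook fact, not a passage from his survey): the second-order equation in (3) has the characteristic equation

$$a{\lambda }^{2}+b\lambda +c=0,\qquad \lambda =\frac{-b\pm \sqrt{{b}^{2}-4ac}}{2a},$$

so its solutions oscillate precisely when \({b}^{2}<4ac\); when \(b\) is small relative to \(a\) and \(c\), the cycle is only weakly damped, with angular frequency close to \(\sqrt{c/a}\).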

This new methodology was published in a survey of “quantitative business cycle theory” (Tinbergen, 1935).Footnote 9 In this survey, Tinbergen aimed to outline systematically the criteria that a business cycle model should meet. “The aim of business cycle theory is to explain certain movements of economic variables. Therefore, the basic question to be answered is in what ways movements of variables may be generated” (p. 241). To arrive at such a business cycle theory, he emphasized the core of the new methodology: to make a distinction between the mathematical form of the equations and their economic interpretation.

The mathematical form determines the nature of the possible movements, the economic sense being of no importance here. Thus, two different economic systems obeying, however, the same types of equations may show exactly the same movements. But, it is evident that for all other questions the economic significance of the equations is of first importance and no theory can be accepted whose economic significance is not clear. (Tinbergen, 1935, p. 242)

Consequently, the criteria outlined only applied to the mathematical forms. The first criterion repeated Frisch’s (1992) definition of a dynamic equation, which is the case “when variables relating to different moments appear in one equation” (Tinbergen, 1935, p. 241). The most general form of a dynamic equation is then

$$\sum_{i=1}^{n}{a}_{i}x(t-{t}_{i})+\sum_{i=1}^{n}{b}_{i}\dot{x}(t-{t}_{i})+\sum_{i=1}^{n}{c}_{i}\underset{0}{\overset{t-{t}_{i}}{\int }}x\left(\tau \right)d\tau =0$$
(4)

In order for this equation to represent a cycle mechanism, the coefficients must satisfy two “wave conditions.” The first wave condition indicated that the solution to this equation must consist of a sine function of the following form: \(C{\lambda }^{t}\text{sin}(\omega t)\), so that the time shape of \(x(t)\) is cyclic. The second wave condition, which he called the “long wave condition,” dictated that the cycle period be long compared with the “time units” and that the cycle should not differ “too much from an undamped one” (p. 280).Footnote 10 As a first approximation to this last condition, Tinbergen put λ = 1 and ω = 0. Both conditions together implied that \(\sum_{i=1}^{n}{c}_{i}=0\). In other words, this dynamic equation “only then lead to long, not too much damped waves when integral terms are of small importance” (p. 281). Thus the decision not to include integral terms in a business cycle model was not based on economic theoretical or empirical considerations but only on mathematical requirements.
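One way to see why the long wave condition pushes the integral terms out of the model (a heuristic reconstruction, not Tinbergen’s own derivation): substitute the undamped trial solution \(x(t)=\sin (\omega t)\) (λ = 1) into Eq. (4). Each integral term then becomes

$$\underset{0}{\overset{t-{t}_{i}}{\int }}\sin \left(\omega \tau \right)d\tau =\frac{1-\cos \left(\omega \left(t-{t}_{i}\right)\right)}{\omega },$$

which over a full cycle has an amplitude of order \(1/\omega\), whereas the lag terms are of order 1 and the derivative terms of order \(\omega\). For the equation to remain balanced as \(\omega \rightarrow 0\) (a long, almost undamped wave), the diverging contributions must cancel, which requires \(\sum_{i}{c}_{i}\approx 0\).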

A consequence of splitting the modeling procedure into a part for mathematical formation and a part for economic interpretation is that any equation representing a “long wave,” may contain “empty” terms. Equations developed to satisfy the wave conditions may contain terms for which no economic meaning can be found, but which are nevertheless necessary to satisfy the wave conditions.

The first to take up Tinbergen’s new methodology and develop it into a general methodology of mathematical modeling in economics was Samuelson, and it was Samuelson rather than Tinbergen who became its well-known proponent.Footnote 11 The article (Samuelson, 1939) in which he picked up Tinbergen’s methodology was an analysis of the “qualitative behavior” of national income. “The present problem is so simple that it provides a useful introduction to the mathematical theory of [Tinbergen’s] work” (Samuelson, 1939, p. 78, fn. 1). The behavior of national income, \(Y_t\), was determined by a simple model of three equations, of which the reduced-form equation is:

$${Y}_{t}=1+\alpha \left(1+\beta \right){Y}_{t-1}-\alpha \beta {Y}_{t-2}$$
(5)

The bulk of the article was an investigation of the dynamic properties of this equation related to the values of α and β. It showed that the map of possible values of α and β can be divided into four regions, A, B, C, and D, representing “qualitatively different types of behavior” (p. 77), see Fig. 3.

Fig. 3

Diagram showing boundaries of regions yielding different qualitative behavior of national income (Source Samuelson, 1939, p. 78, Chart 2)

The behavior in Region A is that \(Y_t\) will asymptotically approach a specific value as time t increases; the behavior in Region B is a “damped oscillatory movement”; in Region C the behavior will be an “explosive, ever increasing oscillation”; and in Region D national income will be “ever increasing” (p. 77).
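A minimal numerical sketch (my illustration, not Samuelson’s) makes these four kinds of behavior concrete: iterating the reduced-form equation (5) for a few parameter pairs and crudely classifying the resulting paths. The particular (α, β) values below are illustrative choices intended to fall in regions A, B, C and D respectively.

```python
# A minimal sketch: iterate Samuelson's reduced-form equation
#   Y_t = 1 + alpha*(1 + beta)*Y_{t-1} - alpha*beta*Y_{t-2}
# and classify the resulting path. The (alpha, beta) pairs are illustrative
# choices intended to fall in regions A, B, C and D respectively.

def simulate(alpha, beta, periods=60, y0=1.0, y1=1.0):
    """Return the path of national income generated by the difference equation."""
    path = [y0, y1]
    for _ in range(periods):
        path.append(1 + alpha * (1 + beta) * path[-1] - alpha * beta * path[-2])
    return path

def classify(path):
    """Crude classification of the qualitative behavior of a simulated path."""
    diffs = [b - a for a, b in zip(path, path[1:])]
    oscillating = any(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))
    exploding = abs(path[-1]) > 10 * max(abs(path[0]), abs(path[1]), 1.0)
    if oscillating and exploding:
        return "explosive oscillation (Region C)"
    if oscillating:
        return "damped oscillation (Region B)"
    if exploding:
        return "ever-increasing income (Region D)"
    return "asymptotic approach to a limit (Region A)"

for alpha, beta in [(0.5, 0.0), (0.8, 0.6), (0.8, 2.0), (0.99, 4.0)]:
    print(f"alpha={alpha}, beta={beta}: {classify(simulate(alpha, beta))}")
```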

Similar to the property charts in mechanical design, e.g. Figure 2, this chart summarizes the information of the qualitative behavior of Eq. (5) for the range of the coefficients α and β by showing the regions of the four kinds of qualitative behavior.

The inclusion of a mathematical object, such as a first- or second-order differential, an integral, or a time lag, is required to make an equation dynamic. But to be an equation describing cyclical behavior, the combination of these dynamic elements must satisfy the wave conditions. For this reason, according to Tinbergen, the integral should not be part of the cyclic equation. The nature of the behavior resulting from a specific combination of the dynamic elements is determined by their coefficients. The resulting behavior, the solution of the equation, depends on the values of these coefficients, whether they are zero or not, whether they are positive or negative, and on how these values are combined. They determine the “qualitative behavior” of an equation.

In this sense, the selection of mathematical objects is similar to the selection of materials in mechanical design. Mathematical objects also have properties that have to be taken into account when designing a model for a specific purpose. The property profile one is looking for in business cycle modeling is a particular equation consisting of a variable, say \(x(t)\), to which specific “dynamic” terms have been added, such as \(x(t-\theta )\) or \(\dot{x}(t)\) and that meets the “wave conditions.” Such a dynamic equation is a material index, that is, a combination of dynamic properties. The values of the equation’s coefficients determine its specific property profile. The designer of a business cycle model then looks for a property-profile that meets the wave conditions.

If a material index consists of only two properties, these properties can be plotted against one another for the full range of their values. Such a map can then show which area meets the required property-profile. In other words, Samuelson’s map showing the regions yielding different qualitative behavior is similar to a material property chart, cf. Figure 2. The combined regions B and C contain all the materials that are cyclic. But to satisfy Tinbergen’s long wave condition, only the material found on the curve representing \(\alpha =\frac{1}{\beta }\) will do.

However, these mathematical objects, that is, these various differential, integral or lag terms should not be confused with formal templates. To show this, I will first discuss Humphreys’s account of formal templates and then discuss the differences with my material account.

5 A form is not necessarily a template

A formal template is described by Humphreys (2019, p. 114) as a mathematical form “having no interpretations beyond a mathematical interpretation.”Footnote 12 It is a “pattern that can serve as common starting point for the development of a product but that can be adapted for the purpose at hand” (p. 116, fn. 20). Thus, at first glance, formal templates are no different from the mathematical objects discussed above. But it is their different kind of applicability that distinguishes templates from mere material elements.

In the context of discussing knowledge transfer across scientific disciplines, Humphreys (2019) revisits his earlier account of templates (Humphreys, 2004) and introduces a distinction between theoretical templates and formal templates. While theoretical templates—“general representational device[s] occurring within a theory” (2019, p. 114)—were discussed in his earlier publication, the concept of formal templates was introduced to account for knowledge transfer. Formal templates have “an explicit set of conditions for applications” (p. 115) and the transfer of a template “rests on the satisfaction of the construction assumptions in the new domain” (p. 115).

The possibility of transferring and applying a formal template to a new domain was clarified by Humphreys (2019, p. 116) in the following way: while a formal template has no empirical content, the empirical content involved in an application is entirely contained in the mapping from the formal template to a target system. This separation of mapping and template is used to explain how a template can travel from domain to domain, in four “premises” (p. 117). Let T be a formal template, and M1 and M2 mappings onto systems S1 and S2:

(1) T is a formal object that can be successfully applied to more than one system.

(2) T retains its identity when mapped on to an empirical system S, whatever S may be.

(3) T + M1 and T + M2 are representations.

(4) A sufficient condition for a representation X to have different empirical content than a representation Y is that X makes at least one empirical prediction that Y does not, or vice versa.

In other words, the applicability of a template is completely defined in terms of giving it empirical content by mapping it onto a target system.

However, the attempts to model the business-cycle mechanism in the 1930s show a different kind of knowledge transfer than Humphreys’s template account. In the first place, the mapping (M) cannot be separated from a mathematical form (T) when developing a representation of the business cycle mechanism. Mapping is not only fulfilling an explicit set of application conditions but also considering the more complex interaction between “function, material, shape and process” as in mechanical design: trying out various selections of mathematical forms (materials) by assessing the properties (qualities) of the different combinations of each selection to see whether these combinations meet the representational requirements. There was no template ready to use; one had yet to figure out what kinds of mathematical forms could be used, in a kind of trial-and-error process. This whole process of considered selection of forms and then trying out different combinations of them does not appear to play a role in the template account.

A second difference in knowledge transfer between templates and materials is that a prerequisite for the successful application of a formal template is its tractability. Although tractability was emphasized more in Humphreys’s earlier account of computational templates (Humphreys, 2004, p. 61), his account of formal templates also implies it. More precisely, this aspect of tractability is a distinguishing feature between any form composed of mathematical objects and a template: not every composition of mathematical elements is a template. A resulting form can only be used as a template if it is indeed tractable. This distinction can be clarified by the same pre-template period of business-cycle modeling.

The problem with the mixed differential-difference equations is that, when they were introduced, there was no mathematical theory available to make them tractable. Systematic treatments of mixed differential-difference equations did not begin to appear until the early 1950s, culminating in Bellman and Cooke (1963). However, various mathematical aspects of these kinds of equations were discussed in the 1930s, notably in Econometrica, by Frisch and Holme (1935), and James and Belz (1936, 1938a, 1938b).

In fact, the discussions in Econometrica centered around one specific equation, namely the reduced form equation of the model of the business cycle (Kalecki, 1935), which Kalecki had presented at the 1933 Leiden Econometric Society meeting:

$$\dot{x}\left(t\right)=ax\left(t\right)-bx(t-\theta )$$
(6)

To analyze this equation, Kalecki had transformed it into Tinbergen’s shipbuilding equation (Eq. 2), \(\dot{x}\left(t\right)=-ax(t-\theta )\), so that he could base his analysis on Tinbergen (1931). Because Frisch saw that Kalecki’s equation is “apt to occur in various kinds of dynamic economic problems,” Frisch and Holme (1935) analyzed a more general version of this equation. While Kalecki had discussed it for a limited set of values of the parameters, namely only for those values that would ensure that the cycle is undamped, Frisch and Holme (1935) assumed that the values of a, b, and θ are positive. James and Belz (1936) then analyzed the more general case where the parameters may have any real value.Footnote 13 Two years later they published an analysis (1938a) for the cases where θ does not have one fixed value but is distributed over all positive values, and an analysis (1938b) presenting a general algebraic method for solving Kalecki’s equation.
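The reduction can be reconstructed with a standard substitution (a sketch of one way to do it, not a quotation from Kalecki): writing \(x(t)={e}^{at}y(t)\) removes the undelayed term from Eq. (6), since

$$\dot{x}\left(t\right)=a{e}^{at}y\left(t\right)+{e}^{at}\dot{y}\left(t\right)=ax\left(t\right)-b{e}^{a(t-\theta )}y\left(t-\theta \right)\Rightarrow \dot{y}\left(t\right)=-b{e}^{-a\theta }y\left(t-\theta \right),$$

which has exactly the form of Eq. (2), with \(b{e}^{-a\theta }\) in place of the coefficient a.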

So, due to the lack of a mathematical theory of difference-differential equations, several analyses of Kalecki’s equation appeared in Econometrica. While Kalecki used Tinbergen’s shipbuilding equation as template, in the sense of a form made tractable by Tinbergen, the larger community of model builders did not consider it general enough. Kalecki’s equation was more general than Tinbergen’s and was therefore considered a potential template for business-cycle modeling. But to be used as such, it first had to be made tractable. The various analyses of the characteristics of its solutions were therefore considered relevant. But Kalecki’s Eq. (6), or more generally mixed-difference-differential equations, were not used as formal templates until a mathematical theory of these kinds of equations was developed in the 1950s. To become a template, a form must first be made tractable.

6 Conclusions

Templates travel because they are used for model-building in different domains. They offer a mathematical format whose “qualitative behavior” is well known. It is often because of this knowledge that a particular template is chosen. But one cannot assume that there are always templates ready for modeling every new phenomenon; moreover, templates themselves had to be designed at some point. A new phenomenon can exhibit behavior that has not yet been captured by a model, so a lot of creativity has to be put into designing a new template. A critical aspect of this design process is the choice of the mathematical objects with which one hopes to capture this phenomenon. This means that one has to assess which of the properties of the various kinds of mathematical objects, and which combinations of them, are most appropriate. The selection of an object is not based on whether it maps directly onto the target system, but on whether its properties enable the model to perform its function. Their combination must have a property-profile that meets the functional requirements of the model. This selection of objects is similar to the selection of materials in mechanical engineering. Materials have properties that determine their material index. In mechanical design, a material is chosen as the most appropriate because its material index fits the specific property-profile one is looking for. Likewise, mathematical objects have specific qualities that are necessary for the model to achieve its purpose.

Because templates are ready-made mathematical forms that travel and find application in a particular domain, they do not illuminate the complexity of model design. Templates are specific forms whose applicability is known, described by an “explicit set of conditions.” But they, too, were once designed. It is in the design phase of any model that one must think carefully about the right combination of the materials to be integrated. The selection of materials is therefore based not only on the material index of each, but also on the performance of their various combinations. In addition, as in mechanical design, the selection also depends on local conditions: on the existing (“in-house”) knowledge about the properties of the materials and their combinations, and about the kinds of materials available. Systematic procedures or recipes cannot help here; the design must be based on local knowledge.