Monday, November 22, 2021

Von Neumann, Sraffa, and Joint Production

 



This week I've decided to work on some slightly more obscure concepts within post-Sraffian economics, particularly the peculiarities arising within joint production. I don't plan on going into the negative labor values-positive profits paradox anytime soon, despite my strong opinions about it; instead, this post concerns the very criterion for the choice of technique within joint product systems.


For readers familiar with joint production and Sraffa's system, it is well known that a great number of perversities arise, such as the non-existence of the standard commodity, alternating curvature of the wage-rate of profits curve both empirically and in theoretical constructions (Soklis 2011), and, what this post is about, disappearing and reappearing "fake" switch points under the standard wage maximization criterion that Sraffa (1960) carries over to joint product systems. What is very interesting is the variety of alternative criteria proposed by authors such as Bidard (1990) and Stamatis (1993), which do not run into the problem of fake switch points that may disappear. To my knowledge, the original example provided by Bidard and Klimovsky (2004) suggests that Sraffa's methodology falters when his conception of the choice of technique for single product systems is applied to multi-product and joint product systems. Before beginning, I should note the important consideration of the "all-productive" families of joint product systems, which do not suffer from these problems, as they are almost completely approximated by single product systems. However, it would be quite a large assumption on my part to disregard the existence of fake switch points simply because there exists a family of joint product systems which behave exactly like single product systems.


To start with, we may go over the schema Sraffa originally used for joint product systems. An integral assumption made by Sraffa, which strongly distinguishes his treatment from other von Neumann type models of joint production (Salvadori 1988), is the equality between the number of production processes and the number of commodities produced. This is slightly controversial, especially in the debates between von Neumann (Marxian) and Sraffian models of joint production, because it ties directly into the construction of examples with negative labor values but positive profits, and hence it is a frequently criticized assumption of Sraffa's construction. Personally, however, I do not find this to be a very fair criticism. The idea of long run gravitation can simply be added here, as I will demonstrate later, and intuitively it is quite easy to comprehend why Sraffa's schema of "square" joint product systems carries well into the real world. Assume the case of rectangular joint product systems, i.e., assume that the number of production processes is greater (or lower) than the total number of commodities in the system. In real capitalist economies, capitalists do not operate production processes which yield lower profits than other techniques, and hence they substitute techniques (often non-linearly, against marginalist intuition). What is likely, in the case we are considering, is that the system tends to move towards a square system even in cases of strong joint production, i.e., where there is purposeful production of two commodities, rather than the more ecologically focused waste/byproduct conception of joint production that Sraffa may have originally had in mind in his brief discussion of joint production. In the case of more processes than commodities, the diffusion of techniques across the economy would lead either to the discovery of new commodities (the Schumpeterian type of answer), or to the more plausible outcome of certain production processes being abandoned as frivolous or sub-profitable compared to the processes in operation. In the case of fewer production processes than commodities in circulation, it is likely that the process of commodity truncation would ensure gradual gravitation to square systems. However, it may be fair to criticize Sraffa for assuming that real capitalist economies would ever reach such a system; as Manara seems to have pointed out, the scrapping of certain production processes need not occur. We have now justified why the assumption of equality between production processes and commodities is satisfied under ideal capitalist conditions, which, in my opinion, is largely the goal of Sraffa's system with regard to empirical relevance.

We may express this intuition mathematically, too. Assume that C and Y are square matrices, for the reasons outlined above, and let their elements be c_ij and b_ij respectively, with i, j = 1, 2, … , n. The economic meaning is that the rows represent commodities and the columns production processes. Under the assumption of "square-ness", both are n x n matrices, operating an equal number of commodities and production processes. We may indicate:

p = (p_1, p_2, … , p_n)

as the vector which represents the prices of the individual commodities, and:

l = (l_1, l_2, … , l_n)

the corresponding vector of quantities of labor employed in each process. With these assumptions made, we can represent Sraffa's system of joint products in a form congruent with his single product systems:

(1 + r) Σ_i c_ij p_i + w l_j  =  Σ_i b_ij p_i        (j = 1, 2, … , n)

or, in matrix form, (1 + r) Cᵀp + w l = Yᵀp.

Having made the quite simple assumption that C and Y are commensurable, Leontief-style input and output matrices, we then have an adequate representation of joint product systems in fully adjusted positions, i.e., under the assumption of square-ness. The conditions of viability remain exactly the same as those of single product industries under the square assumption, which is why I will not go into the viability conditions of the joint product system given above. We may now proceed to Bidard's construction of an algorithmic, set-theoretic representation of the choice of technique. I find Bidard's approach the most promising, despite the fact that in some cases no algorithm converges to the dominant technique sensu stricto. Bidard starts from an assumption similar to that of Walrasian systems, namely tâtonnement and the sending out of signals, but this assumption is not "necessary" for his construction. We may then proceed to his "Julius" type cases in the "Steedman-Schefold-Salvadori" example. We first assume the golden rule, von Neumann condition r = g, an assumption typical of the von Neumann and Sraffian literature on joint production. The definition Bidard provides is that, given a direction of demand d that is necessarily non-negative, d ≥ 0, a "Julius technique" is one which can produce a net product in the direction d while growing at rate g. For the following section, readers are encouraged to refer to Bidard's actual examples prior to consideration of this brief example.
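For concreteness, writing c_a and y_a for the a-th columns of C and Y (the input and output vectors of process a), the net product vector of process a at the growth rate g may be written as follows (this explicit formula being my own gloss on Bidard's construction, not a quotation of it):

b_a(g) = y_a - (1 + g) c_a

so that an n-set A is a Julius technique for d precisely when d = Σ_{a ∈ A} x_a b_a(g) for some activity levels x_a ≥ 0.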


We may allow for free disposal, as present within von Neumann's original general equilibrium model, though this does not fundamentally alter the construction. Free disposal is simply a condition that guarantees non-negative prices; in the real world disposal likely carries costs, but the assumption is still a reasonable one, since operating processes at negative prices for a given d (d ≥ 0) is technologically infeasible and runs contrary to the behavioral condition we may assume. For expositional ease, we may assume firms with growth-maximizing behavior (g maximization) à la Galbraith, which is a further reason why free disposal is not necessarily an unrealistic assumption. Now, we may proceed. An "n-set" A is a Julius technique iff d is a non-negative combination of the vectors b_a(g), a ∈ A. The dominant Julius technique, again referring to Bidard's geometrical examples, is the one whose facet of the northeast convex envelope of the points b_a(g) is met by the ray through d. For a given direction of demand, and under viability conditions similar to those given above, the agent converges to this dominant technique. The technique is square, to allow for the typical price determination in line with Sraffa, and the market converges to it within a finite number of steps, i.e., 'sensu stricto'. The system given above is of course square, but if we start outside a fully adjusted position between commodities and techniques operated, i.e., outside the n x n case, we may admit truncation (the economic rationale was given earlier), and the system turns into an (n - k) x (n - k) system. This fits squarely into the von Neumann-Sraffa tradition on joint production, with its truncations and its input-output determination of prices and quantities, and allows for gradual adjustment to the uniquely determined price system of the square system. Note that in real economies, despite the lack of cost-less adjustment, this idea remains roughly correct, and hence we may derive the familiar condition of truncated square systems. We may go over another of Bidard's examples before moving to the other part of this blog post.


Bidard also considers the idea of colored techniques, which are distinct from the purely 'SSS' (Schefold-Steedman-Salvadori) example given above. The colored techniques that Bidard incorporates in an algorithmic vein are quite similar to the example above but, despite his claim, seem to provide a more general case than the one given above. So, we may start with p processes which may be split into n disjoint groups (no elements in common), A_1, . . . , A_n, with

A_i ∩ A_j = ∅        (i ≠ j)

And:

A_1 ∪ A_2 ∪ . . . ∪ A_n = {1, 2, . . . , p}

Using Bidard's terminology, a 'Jim' set is one made up of one process from each colored group. This essentially relaxes the restrictive conditions given above, so that gravitation may be towards a technique which shares no process with the base technique. Despite the mathematical theorem given by Bidard about convergence to such a technique 'sensu stricto', it is of dubious validity whether real agents would converge to a technique that shares nothing in common with the base technique in an environment of uncertainty. This bears some resemblance to the critique Joan Robinson laid out of Sraffa's analysis of the choice of technique, i.e., seamless transition between techniques that have no stringent relationship with one another presumes an environment without uncertainty or quantity constraints. If the techniques share the same technological base or a similar r-viability, the non-linear switches across techniques become much more plausible, assuming that realism is the largest consideration Sraffians must deal with when venturing out of the theoretical realm. As noted at the start, Sraffian examples of joint production usually retain a dependency upon single production, perhaps as a control, and there is often consideration of similarities with pure single product industries. In this vein, Bidard describes colored systems in which each technique is defined by its main product, i.e., we may be in a case of joint production (strong, with both outputs positive and produced on purpose), but due to the composition of demand one product characterizes the system more than the other. Presume good x and good z are under comparison, and presume they are the outputs of a joint product system that falls under the case of strong joint production, i.e., x ≥ 0 and z ≥ 0. We have ensured a basic viability condition for this extremely rudimentary system, but now we need to discuss the relevance of goods x and z. These goods are not independent of considerations of the direction of demand, and if the direction of demand maintains that the production of x dominates z, then the system is not "too far" from a single product system, characterized by its most important product x. Note that this makes no presumption about sheer technological performance taken atomistically; it concerns the dichotomy of H- and F-inferiority and a view of final demand as largely determined by the relative power of consumers and their wishes (Hosoda 1993). As a caveat, this does not fall into marginalist thinking, as it only has to do with activity levels rather than the choice of technique, made obvious by the assumption of joint production. Further, we may simply lay out Bidard's own numerical example (future examples will use our own numbers, though not here):

n = 3 commodities, p = 6 processes

c1 = (6, -3, 2)
c2 = (6, 1, -4)
c3 = (2, 6, -3)
c4 = (-4, 6, 1)
c5 = (-3, 2, 6)
c6 = (1, -4, 6)

As Bidard points out, the (convex) cones generated by (c1, c4, c6), (c2, c4, c5), and (c2, c3, c6) have no common intersection d > 0, hence the example is not of the 'Jim' or 'Julius' type, as is made obvious by the disjointness condition. For a detailed proof concerning dominant techniques amongst 'Jim' type techniques, see Bidard's paper; such a proof is outside the scope of this very brief blog post. Given the existence of a common r-net product, the dominant 'Jim' technique is the one to which agents are theoretically and mathematically certain to converge. However, Bidard points out that this convergence may not be sensu stricto. What is important in such a case is that, with adequate specification of the model, Bidard's algorithm does lead to convergence, and his criterion for the choice of technique remains adequate. The importance of Bidard's system is that it allows for convergence yet makes no specification about the linearity of the convergence process per se, given the extra rule that Bidard specifies near the end of section 3: returning to the initial technique, when nothing guarantees convergence to the dominant technique, leads to the exploration of new techniques. Why this is important is that it allows for complex movements in the choice of technique (and hence in use), while maintaining a definite process towards which adoption tends. In quite a Schumpeterian vein, this adequately analyzes an open system with genuine choice over which technique is reached, while retaining theoretical determinacy.
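As a quick illustration of the cone-membership test behind the 'Julius' condition (a minimal sketch of my own rather than Bidard's algorithm; the demand direction d = (1, 1, 1) and the helper names are hypothetical choices), one can check which square 3-sets of the six vectors above contain a given d in their non-negative cone, and which of those sits farthest out along the ray through d:

import numpy as np
from itertools import combinations

# The six vectors of Bidard's example, read as net product vectors (one per row).
c = np.array([
    [ 6, -3,  2],
    [ 6,  1, -4],
    [ 2,  6, -3],
    [-4,  6,  1],
    [-3,  2,  6],
    [ 1, -4,  6],
], dtype=float)

d = np.array([1.0, 1.0, 1.0])    # an arbitrary non-negative direction of demand

def in_cone(subset, d, tol=1e-9):
    """Solve d = sum_a x_a c_a over a square 3-set and test x >= 0."""
    B = c[list(subset)].T        # columns are the chosen vectors
    try:
        x = np.linalg.solve(B, d)
    except np.linalg.LinAlgError:
        return None              # degenerate (singular) subset
    return x if np.all(x >= -tol) else None

julius = {}
for subset in combinations(range(6), 3):
    x = in_cone(subset, d)
    if x is not None:
        # The ray through d leaves the convex hull of the chosen vectors at 1/sum(x),
        # so a smaller total activity means the supporting facet lies farther out.
        julius[subset] = x.sum()

if julius:
    dominant = min(julius, key=julius.get)
    print("Julius 3-sets containing d:", sorted(julius))
    print("candidate dominant technique:", dominant)
else:
    print("no square 3-set contains d in its non-negative cone")

Since Bidard notes that the three cones he singles out have no common intersection d > 0, the same membership test will reject at least one of those three sets for any particular positive d one cares to try.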


The constructive work Bidard attempts is motivated by the primarily deconstructive work done by him and Klimovsky on the inadequacy of extending the single product choice of technique, i.e., wage maximization, into joint product systems that do not fall into the families which are theoretically equivalent to single product systems (all-engaging systems). However, it is important to note that their deconstructive work is not necessarily correct in specifying that the reason why wage maximization needs to be abandoned is the persistence of 'fake switchpoints'. In this vein, I invoke both Stamatis (1993) and Manoudakis (2010). Bidard and Klimovsky, on technological grounds, suggest that these are "fake switchpoints", due to the price relationships for each technique along a switchpoint. However, Manoudakis points out that these switchpoints can also be considered real switchpoints which only appear fake because of the normalizations necessary in the assumption of wage maximization at a given rate of profit for a given technique. A very basic example of wage maximization across techniques, and of deciding which technique to adopt in joint product systems, is given through technique v and technique l. For a given rate of profit, a technique is said to be wage maximizing if the largest possible wage is obtained with this technique, under a given price normalization, that is:

w_v(r) ≥ w_l(r),        under the common normalization s · p_v = s · p_l = 1,

with the given 'typical commodity' denoted as s. What is important above is that the normalization affects the criterion for the choice of technique too, as made obvious by the appearance of s as the normalization commodity in the determination of which technique maximizes the wage at a given rate of profit. We may also demonstrate that normalizations affect the distributive variables themselves; this matters because the relative rate of profit directly affects the choice of technique (maximizing the wage rate at a given rate of profit). The genuine switchpoint can also vary, rather than simply the two "fake" switchpoints given by Bidard and Klimovsky, implying that in general the wage maximization criterion does not act as a standard which can unequivocally rank neighboring 'square' techniques. As made obvious by the presence of s in the example above, the ranking of techniques is not independent of price normalizations. B&K's explanation for the persistence of just two fake switchpoints ignores the influence of the normalization commodity on the w-r criterion, i.e., the first 'real' switchpoint may also vary with the normalization commodity. Similarly, the cost minimization criterion for the choice of technique also depends upon the normalization commodity. The directions indicated by both the cost and the w-r criteria can be entirely changed by a variation in the normalization commodity; hence, Sraffa's original criterion remains of shaky validity for heterogeneous agents.
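To see how the normalization commodity enters, note that for a square technique the price system given earlier can be solved explicitly (a sketch in the notation above, assuming the matrix Yᵀ - (1 + r)Cᵀ is invertible at the relevant r):

p(r) = w (Yᵀ - (1 + r) Cᵀ)⁻¹ l,        and with s · p(r) = 1,        w(r) = 1 / [ s · (Yᵀ - (1 + r) Cᵀ)⁻¹ l ],

so the entire w-r curve of a technique, and therefore any ranking of techniques by the height of w at a given r, is relative to the chosen s.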


What is important, for Bidard's market algorithm, is the presupposition of normalization by Sraffa's standard commodity. The unequivocal ranking of techniques remains 'correct', but only due to the logical form through which it imposes itself. It has already placed itself in the position of remaining valid in the face of changes in the distribution of income, because prices have already been normalized by the Sraffian standard commodity; any further analysis of the system's reaction to changes in distribution will be of no avail, due to the common composition of the means of production and net output (g). Bidard's theory must be true a priori by necessity of its presuppositions, but moving outside of the case of invariant proportions between g and the means of production (the 'Charasoffian standard system') implies that his system is subject to changes similar to those affecting the cost minimization criterion. Bidard's algorithm is also reminiscent of the cost minimization criterion when mathematically represented as:

Σ_i b_ia p_i  ≤  (1 + r) Σ_i c_ia p_i + w l_a        for every process a, with equality for the operated processes,

in obvious notation. Hence, the system operates similarly to the cost minimization criterion but, being normalized by the Sraffian standard commodity, it is immune to changes in normalization. So, while perhaps theoretically correct, Bidard's algorithm lacks use outside of highly specified cases. Manoudakis and Stamatis both seem to suggest the von Neumann criterion as an adequate representation of Charasoffian systems invariant to changes in normalization with joint products, but I find myself disagreeing with their dismissal of Bidard's algorithm. The von Neumann criterion used by Bidard for the SSS case does, crucially, assume conditions similar to those of the typical approach used by Stamatis, but it provides a set-theoretic algorithm for finding the dominant technique. I remain adamant that Bidard's system is equivalent to the von Neumann system used by Stamatis and Manoudakis, hence their dismissal of Bidard seems too heavy-handed to be valid without a critique of their own approaches to joint production. Bidard provides an adequate proof of theorem 3, the existence and uniqueness of a dominant technique for the SSS von Neumann type systems. We may replicate this proof in our notation, for the reader's ease:

Assume T = (t_1, … , t_j, … , t_n) and T′ = (t′_1, … , t′_j, … , t′_n) are the dominant techniques supported by prices p and p′ respectively, and assume that p and p′ are not equal to one another. Taking the wage as numéraire, the cost minimization property gives:

b_a(g) · p  ≤  l_a        for every process a, with equality for a ∈ T,

b_a(g) · p′  ≤  l_a        for every process a, with equality for a ∈ T′.

Similar to von Neumann, we presume inequalities rather than the equalities of Sraffa's joint product schema (square-ness). From a positive common d we have:

d  =  Σ_{a ∈ T} x_a b_a(g)  =  Σ_{a ∈ T′} x′_a b_a(g),        with x_a ≥ 0 and x′_a ≥ 0.

Therefore we have,

d · p′  =  Σ_{a ∈ T} x_a b_a(g) · p′  ≤  Σ_{a ∈ T} x_a l_a  =  d · p,

d · p  =  Σ_{a ∈ T′} x′_a b_a(g) · p  ≤  Σ_{a ∈ T′} x′_a l_a  =  d · p′,

so that d · p = d · p′ and the inequalities above hold as equalities.

Using the cost minimization property and the square-ness of the techniques, p = p′.


In this sense it has been proven that the dominant technique is unique, and that there exists an algorithm which does not converge sensu stricto, but which does eventually converge, after recalculation and the working-through of new avenues, to the dominant technique (the reader is advised to see Bidard's example 4 for Jim techniques, for a graphical example of this). Working through the von Neumann framework, then, the assertion that Bidard's criterion is only valid under certain conditions, while the von Neumann framework is consistently valid as an extension of the Charasoffian system, seems contradictory. Hence, I believe that Bidard still provides an adequate foundation for the choice of technique when working through the SSS von Neumann type examples.


This week's post was a bit more esoteric, but I found it interesting enough to discuss on this blog. I apologize for not updating in a while; I was constrained by studying for midterms over the last month or so.


References:

Soklis, G. (2011). Shape of Wage–Profit Curves in Joint Production Systems: Evidence from the Supply and Use Tables of the Finnish Economy. Metroeconomica, 62: 548–560. https://doi.org/10.1111/j.1467-999X.2011.04125.x

Bidard, C. (1990). An Algorithmic Theory of the Choice of Techniques. Econometrica, 58(4), 839–859. https://doi.org/10.2307/2938352

Stamatis, G. (1993). The Impossibility of a Comparison of Techniques and of the Ascertainment of a Reswitching Phenomenon. Jahrbücher für Nationalökonomie und Statistik, 211(5-6), 426–446. https://doi.org/10.1515/jbnst-1993-5-607

Manoudakis, Kosmas, 2009. "Fake switch points," MPRA Paper 26109, University Library of Munich, Germany.

Bidard, C., & Klimovsky, E. (2004). Switches and fake switches in methods of production. Cambridge Journal of Economics, 28(1), 89–97. http://www.jstor.org/stable/23602175

Salvadori, N. (1988). Fixed Capital within a Von Neumann-Morishima Model of Growth and Distribution. International Economic Review, 29(2), 341–351. https://doi.org/10.2307/2526670

Manara, C.F. (1968), Il modello di Sraffa per la produzione congiunta di merci a mezzo di merci, L’Industria, 1: 3-18.

Hosoda, E.B. (1993). Negative Surplus Value and Inferior Processes. Metroeconomica, 44: 29–42.

Manoudakis, Kosmas, 2010. "Choosing techniques or typical subsystems instead? A PhD thesis," MPRA Paper 26178, University Library of Munich, Germany.




Saturday, October 16, 2021

A Position On the Philosophy of Mathematics

A Slightly Economics Related Post


Recently this blog has been concerned with some of the more practical considerations of growth and proper representative models, along with the theory of distribution. I intend to continue with this, but for now, I believe it is worth considering a meta-theoretical note on formal mathematical theories in economics.

The philosophy of mathematics is quite an extensive subject with varying interpretations, just as economics is. Economics has concerned itself with the more formalistic aspects of modeling for a great deal of time, particularly with the proper methodology of proofs of the existence of mathematical phenomena and of counter-examples to conjectures, e.g., the marginal productivity theory of the remuneration of factors of production and the counter-example that reswitching posed to it. The reason for incorporating such topics into our economic understanding is that economics has blurred the lines with pure mathematics as time has gone on, from the counter-revolutions starting in the 1960s around the "Rational Expectations Hypothesis" to, e.g., the minor revolution led by Piero Sraffa at Cambridge in the 1960s. All of these have established the necessity of a meta-theory of economic proofs. The importance of a meta-theory for mathematics, and, by the transitive property, for economics, should not be understated in any attempt to further discourse on the formal aspects of economics.

Mathematics itself is riddled with peculiarities, ever since the older Greco-Roman branch of mathematics established itself as an authority on the nature of mathematical objects as real, but abstract, entities. Fundamental objects in mathematics, e.g., the constant π (pi), are treated as real but abstract objects that directly impact the functioning of the world. This position is generally called "Platonism" and is one of the oldest positions in the philosophy of mathematics. I do not find this view particularly useful with regard to the ability to justify economic proofs: if we treat mathematical objects as real bodies, then the very obvious question arises of how we are to acquire knowledge of said objects. As mentioned above, these objects are held to be abstract, and hence their self-evidence is not certain, which leads to dissent over the philosophy of certain mathematical objects. The question arises, however, of how individuals are able to recognize these objects for themselves. On a theoretical note, how do we acquire knowledge of these abstract objects? All justifications of such abstract objects imply an element of mathematical anti-realism, whether through the use of constructive intuition or through knowledge of empirical facts. The problem with such a position, however, is that once such observer-dependent concepts are introduced into mathematics, the ability to verify the "independence" of Platonic objects becomes dubious; there is no longer any premise for stating that mathematical objects exist independently of the observer.

On the opposite end of such a spectrum is the concept of "formalism": the view that mathematics is fundamentally a game that evolves with societally given rules, and is nothing but the manipulation of alpha-numerical strings according to such rules. This approach, pioneered by David Hilbert, essentially treated mathematics as completely dependent on society and its own evolution. Assume, for the sake of argument, that as time passes we forget everything we knew about mathematics, due to some catastrophe. Further, assume that we change a great deal in terms of our biological structure, so that our ability to manipulate said alpha-numerical strings changes. In such a situation, a formalist would hold that mathematics itself would bear no similarity to its older version prior to such a destructive, Schumpeter-styled, evolution. Hilbert sought to reconstruct mathematics on his conception of mathematical objects as game-styled manipulation, using an axiomatic method. The problem with such an approach, however, was that its validity depended upon the axiomatic method being truly complete and all-encompassing. One of the most notorious discoveries in the philosophy of mathematics was Gödel's incompleteness theorems, which state, roughly, that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proved within it. I will refrain from discussing this at length, leaving it to a future post on the implications for economics, but in essence Gödel's incompleteness theorem holds that a true mathematical proposition, within the axiomatic method, cannot always be proved inside its own system, implying a fundamental element of uncertainty in our mathematical computational abilities.

The two other positions in the philosophy of mathematics that I will briefly discuss are Logicism and Intuitionism/Constructivism. Logicism purports to describe the fundamental nature of mathematics through logic. All mathematical statements are reducible to logical, true-or-false statements, and in that case mathematics is a branch of logic. Such a theory sits well with most individuals: it explains to a satisfactory degree how we acquire knowledge of mathematics, and we find it almost completely obvious that logic is the fundamental piece of branches of mathematics such as geometry. Henri Poincaré (1902), among others, however, ruthlessly criticized logicism as a theory that adds little to our pure mathematical knowledge. Reducing everything to a few true or false statements essentially means mathematics itself is tautologous, and that any acquisition of further mathematical knowledge is simply equivalent to the knowledge known before.

On a more theoretical point, Poincaré held that mathematics is the application of the pure intuition in a construction of the intuitive continuum, arithmetical and topological. Our knowledge of geometry comes from our pure intuition, our ability to construct mathematical objects using the pure intuition of time and space. The pure intuition of space manifests itself in geometry: we "construct" geometrical objects using our ability to form a topological continuum and analyze the intuitive relationships between lines, points, rays, and segments in Euclidean geometry.1 The pure intuition of time acts as our ability to create a numerical continuum of real numbers, irrationals, and imaginary numbers. Poincaré took an essentially Kantian view of mathematics when criticizing the views of Dedekind, Russell, and Frege on the foundations of mathematics, utilizing a similar conception of the intuitive "synthetic a priori" and transcendental idealism. Our mathematical knowledge is all constructed through our use of intuition, and deductive proofs which are reducible to true or false statements lack the ability to expand our knowledge of mathematics. Take the example of 1 + 1 = 2; Kant argued that such a statement is not true by its premises alone, and is not knowable without intuition. The mind constructs a numerical continuum using the intuition and constructively finds the final result; the truth of 1 + 1 = 2 is not deductively reducible to a logical a priori statement, for that would dispense with the mind's intuitive construction of an arithmetical continuum of numbers. The Brouwerian continuity principle (Poincaré being a precursor to an intuitionistic-conventionalist philosophy of mathematics) can formally be framed as follows:

Assume a predicate A(α, x), with α ranging over choice sequences (the intuitive idea of a set, in essence a constructible sequence; see Troelstra (1983)) and x over the naturals. We can define extensionality as:

∀α ∀β ( α = β  →  ( A(α, x) ↔ A(β, x) ) ),        where α = β abbreviates ∀n ( α(n) = β(n) ).

Hence, the weak continuity principle is:

∀α ∃x A(α, x)  →  ∀α ∃m ∃x ∀β ( β̄m = ᾱm  →  A(β, x) ),

where α and β range over choice sequences, m and x over naturals, and ᾱm is the initial segment of α of length m. This holds that the number we assign to a choice sequence can depend only on a finite initial segment of that sequence, using a tree-like conception of number-generators and free choice sequences. The importance of the above is that we can provide geometrical and arithmetical knowledge of objects on the continuum, but recast in intuitionistic terms, as the non-constructive treatment given by classical mathematics remains unacceptable. Providing a continuum in this way allows for the construction of mathematical objects by the mind, i.e., mathematical objects are mind-dependent, and hence constructions. This implies that the use of the intuition is necessary in mathematics, and hence that mathematics is not reducible to pure logico-mathematical statements, meaningless alpha-numerical strings, or abstract mind-independent objects.
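As a small illustration (a toy example of my own, not one of Troelstra's): take the predicate

A(α, x)  :≡  x = α(0).

Certainly ∀α ∃x A(α, x), and the weak continuity principle then asserts that for each α the witness x is fixed by some finite initial segment ᾱm; here m = 1 suffices, since the witness consults only the first element of the sequence. What the principle rules out are assignments that would require surveying a completed infinite sequence all at once.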

The proper interpretation of a meta-theory of proofs seems to point in the direction of intuitionistic constructivism (to be distinguished from non-intuitionistic constructivism). In this line of thought, to prove the existence of an object one may not rely on the principle of the excluded middle, p ∨ ¬p; indirect proofs of existence cannot hold as a theory of proofs. The use of algorithmic, constructive axiomatizations is the only valid manner of proving mathematical-economic propositions formally. One peculiarity is the parallel between Sraffa's constructive, algorithmic method and the BHK (Brouwer–Heyting–Kolmogorov) interpretation of constructive proofs. Sraffa's standard system is constructed through a constructivist methodology of repeated, algorithm-like steps, together with a proof of the uniqueness of the standard system (Miyao 1977). Such parallels, in terms of the constructive existence proof, go quite deep, seemingly implying a conscious choice on Sraffa's part regarding his proof style. This interpretation seems to be suggested by Velupillai (2008), and it is interesting to note that Sraffa's academic friendship with Ludwig Wittgenstein at Cambridge, an (in)famous philosopher with a constructivist philosophy of mathematics (Rodych 2013), may provide a link to Sraffa's peculiar proof style.
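To give a flavour of the algorithm-like character of the construction (a minimal sketch of my own, using a made-up 2×2 input matrix A for a single-product system, not Sraffa's or Miyao's own procedure): the standard multipliers and the maximum rate of profits R can be approached by repeatedly applying the technology matrix and rescaling, each pass a finite, effective step:

import numpy as np

# Hypothetical single-product input matrix: a_ij is the amount of commodity i
# used up in producing one unit of commodity j.
A = np.array([[0.2, 0.1],
              [0.3, 0.4]])

q = np.ones(A.shape[0])              # initial guess for the standard multipliers
for _ in range(200):                 # each pass is a finite, effective step
    q_next = A @ q                   # inputs required by the gross outputs q
    q_next /= q_next.sum()           # rescale to keep the multipliers comparable
    if np.allclose(q_next, q, atol=1e-12):
        break
    q = q_next

lam = (A @ q).sum() / q.sum()        # dominant eigenvalue of A at convergence
R = 1.0 / lam - 1.0                  # maximum (standard) rate of profits

print("standard multipliers:", q)
print("maximum rate of profits R:", R)

Each iterate is produced from the previous one by a finite computation, which is the sense in which the existence of the standard commodity is exhibited constructively rather than merely asserted.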

The purpose of this brief blogpost was to explore, on a more meta-theoretical note, the foundations of mathematical economics and the proper methodology for proofs therein, along with a note on the mathematical foundations of Sraffa's Production of Commodities by Means of Commodities. I plan on exploring this more in depth later, but due to an unexpected sickness I seem to lack the energy to write a sufficiently long blog post. The old disclaimer applies with regard to equations; I apologize for having to screenshot equations from Google Docs, but Blogger does not render them correctly otherwise.

Footnote:

  1. Some hold that the intuitionistic methodology is incompatible with non-Euclidean theories of geometry, such as the Riemannian geometry underlying Einstein's approach. Poincaré criticized this view through a conventionalist lens of geometry and the sciences; for further discussion of the intuition of space and time and non-Euclidean geometry, see Arledge (2014).

References: 

Rodych, Victor. "Mathematical Sense: Wittgenstein’s Syntactical Structuralism". Wittgenstein and the Philosophy of Information: Proceedings of the 30th International Ludwig Wittgenstein-Symposium in Kirchberg, 2007, edited by Alois Pichler and Herbert Hrachovec, Berlin, Boston: De Gruyter, 2013, pp. 81-104. https://doi.org/10.1515/9783110328462.81

Velupillai, Kumaraswamy. (2008). Sraffa’s Economics in Non-Classical Mathematical Modes. 10.1057/9780230375338_15.

Miyao, Takahiro. “A Generalization of Sraffa’s Standard Commodity and Its Complete Characterization.” International Economic Review, vol. 18, no. 1, [Economics Department of the University of Pennsylvania, Wiley, Institute of Social and Economic Research, Osaka University], 1977, pp. 151–62, https://doi.org/10.2307/2525774.

Troelstra, A. S. “Analysing Choice Sequences.” Journal of Philosophical Logic 12, no. 2 (1983): 197–260. http://www.jstor.org/stable/30226270.

Poincaré, H. (1905). Science and Hypothesis. London, New York: Scott.

Arledge, Chris. (2014). Kant, non-Euclidean Geometry and A Priori Knowledge: A Reassessment.