# Another look at the Gardner problem

Abstract

In this paper we revisit one of the classical perceptron problems from neural networks and statistical physics. In [10] Gardner presented a neat statistical-physics type of approach for analyzing what is now typically referred to as the Gardner problem. The problem amounts to characterizing the statistical behavior of a spherical perceptron. Among various quantities, [10] determined the so-called storage capacity of the corresponding neural network and analyzed how it changes as various perceptron parameters vary. In more recent work [17, 18] many of the findings of [10] (obtained on the grounds of the statistical-mechanics replica approach) were proven to be mathematically correct. In this paper we take another look at the Gardner problem and provide a simple alternative framework for its analysis. As a result we reprove many now known facts and rigorously reestablish a few other results.

Index Terms: Gardner problem; storage capacity.

## 1 Introduction

In this paper we will revisit a classical perceptron type of problem from neural networks and statistical physics/mechanics. A great deal of the problem’s popularity has its roots in a nice work [10]. Hence, to describe the problem we will closely follow what was done in [10]. We start with the following dynamics:

$$S_i(t+1) = \operatorname{sign}\left(\sum_{j=1, j\neq i}^{n} J_{ij} S_j(t) - T_i\right), \quad 1\leq i\leq n. \qquad (1)$$

Following [10], for any fixed $i$ we will call each $S_i$, $1\leq i\leq n$, the Ising spin, i.e. $S_i\in\{-1,1\}$. Following [10] further, we will call $J_{ij}$ the interaction strength for the bond from site $j$ to site $i$. $T_i$, $1\leq i\leq n$, will be the threshold for site $i$ (we will typically assume that $T_i=0$; however, all the results we present below can easily be modified to include scenarios where $T_i\neq 0$).

Now, the dynamics presented in (1) works by moving from time $t$ to time $t+1$ and so on (of course, one assumes an initial configuration at, say, $t=0$). Moreover, the above dynamics will have a fixed point if, say, there are strengths $J_{ij}$ such that for any pattern $\xi^{\mu}$, $1\leq \mu\leq m$,

$$\operatorname{sign}\left(\sum_{j=1, j\neq i}^{n} J_{ij}\,\xi_j^{\mu} - T_i\right) = \xi_i^{\mu}, \quad 1\leq i\leq n,\ 1\leq \mu\leq m. \qquad (2)$$

Now, of course, this is a well known property of a very general class of dynamics. In other words, unless one specifies the interaction strengths $J_{ij}$, the generality of the problem essentially makes it easy. [10] then proceeded by considering spherical restrictions on $J_{ij}$. To be more specific, the restrictions considered in [10] amount to the following constraints:

$$\sum_{j=1}^{n} J_{ij}^2 = 1, \quad 1\leq i\leq n. \qquad (3)$$

Then the fundamental question considered in [10] is the so-called storage capacity of the above dynamics, or alternatively of the neural network that it would represent. Namely, one then asks how many patterns $m$ (the $\mu$-th pattern being $\xi^{\mu}=(\xi_1^{\mu},\xi_2^{\mu},\dots,\xi_n^{\mu})$) one can store so that there is an assurance that they are stored in a stable way. Moreover, since having the patterns be fixed points of the above introduced dynamics is not enough to ensure a finite basin of attraction, one often imposes a somewhat stronger threshold condition

$$\xi_i^{\mu}\left(\sum_{j=1, j\neq i}^{n} J_{ij}\,\xi_j^{\mu} - T_i\right) > \kappa, \quad 1\leq i\leq n,\ 1\leq \mu\leq m, \qquad (4)$$

where $\kappa$ is typically a positive number.
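To make the stability requirement concrete, here is a toy numerical illustration of condition (4) (with the thresholds set to zero and $\kappa$ effectively $0$). The Hebbian choice of interaction strengths below, as well as the toy sizes, are our own illustrative shortcuts; Gardner's analysis instead optimizes over all $J$ on the sphere:

```python
import math
import random

random.seed(1)
n, m = 300, 9  # toy sizes of our choosing; m/n is far below capacity here

# m random +-1 patterns xi^mu
xi = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

# Hebbian interaction strengths (a classical illustrative choice only;
# Gardner's analysis asks for the best J on the sphere, not this one)
J = [[sum(xi[mu][i] * xi[mu][j] for mu in range(m)) / n for j in range(n)]
     for i in range(n)]
for i in range(n):
    J[i][i] = 0.0  # no self-interaction
    norm = math.sqrt(sum(v * v for v in J[i]))
    J[i] = [v / norm for v in J[i]]  # spherical normalization, as in (3)

# worst-case margin over all sites i and patterns mu, as in (4) with T_i = 0
margin = min(xi[mu][i] * sum(J[i][j] * xi[mu][j] for j in range(n))
             for mu in range(m) for i in range(n))
print(margin > 0)  # positive margin: every pattern is a fixed point of (1)
```

At this tiny loading even the suboptimal Hebbian rule leaves all stored patterns stable; the interesting question, answered by (6) below, is how large $m/n$ can get for the best possible $J$.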

In [10] a replica type of approach was designed and, based on it, a characterization of the storage capacity was presented. Before showing what exactly such a characterization looks like we will first formally define it. Namely, throughout the paper we will assume the so-called *linear* regime, i.e. the scenario where the pattern length $n$ and the number of different patterns $m$ are large but proportional to each other. Moreover, we will denote the proportionality ratio by $\alpha$ (where obviously $\alpha$ is a constant independent of $n$) and will set

$$m = \alpha n. \qquad (5)$$

Now, assuming that the $\xi_i^{\mu}$, $1\leq i\leq n$, $1\leq \mu\leq m$, are i.i.d. symmetric Bernoulli random variables, [10], using the replica approach, gave the following estimate for the largest $\alpha$ such that (4) holds with overwhelming probability (by overwhelming probability we will in this paper mean a probability that is no more than a number exponentially decaying in $n$ away from $1$):

$$\alpha_c(\kappa) = \left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^2 e^{-\frac{z^2}{2}}\,dz\right)^{-1}. \qquad (6)$$

Based on the above characterization one then has that $\alpha_c(\kappa)$ achieves its maximum over positive $\kappa$'s as $\kappa\rightarrow 0$. One in fact easily then has

$$\lim_{\kappa\rightarrow 0}\alpha_c(\kappa) = 2. \qquad (7)$$

The result given in (7) is of course well known and has been rigorously established either as a pure mathematical fact or even in the context of neural networks and pattern recognition [16, 9, 15, 27, 26, 7, 13, 6, 25]. In a more recent work [17, 18] the authors also considered the Gardner problem and established that (6) also holds.
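As a quick numerical sanity check of (6) and (7), the Gardner integral admits a closed form in terms of the standard normal distribution functions, namely $(1+\kappa^2)\Phi(\kappa)+\kappa\phi(\kappa)$ (a standard Gaussian identity). The following sketch (plain Python, function names our own) evaluates $\alpha_c(\kappa)$:

```python
import math

def gardner_integral(kappa):
    """(1/sqrt(2*pi)) * integral_{-kappa}^{inf} (z+kappa)^2 e^{-z^2/2} dz,
    via the closed form (1 + kappa^2) * Phi(kappa) + kappa * phi(kappa)."""
    Phi = 0.5 * (1.0 + math.erf(kappa / math.sqrt(2.0)))            # normal cdf
    phi = math.exp(-kappa * kappa / 2.0) / math.sqrt(2.0 * math.pi)  # normal pdf
    return (1.0 + kappa * kappa) * Phi + kappa * phi

def alpha_c(kappa):
    """Storage capacity prediction from (6)."""
    return 1.0 / gardner_integral(kappa)

print(alpha_c(0.0))        # -> 2.0, matching (7)
print(alpha_c(1.0) < 2.0)  # -> True; a larger margin kappa reduces capacity
```

Since the expression in (6) makes sense for negative $\kappa$ as well, the same function evaluates the bound relevant in Subsection 2.3, where it exceeds $2$.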

Of course, a whole lot more is known about the model (and its different variations) that we described above and will study here. All of our results will of course easily translate to these various scenarios. Instead of mentioning all of these applications here, in this introductory paper we chose to present the key components of our mechanism on the most basic (and probably most widely known) problem. All other applications we will present in several forthcoming papers.

Also, we should mention that many variants of the model that we study here are possible from a purely mathematical perspective. However, many of them have found applications in various other fields as well. For example, a great set of references that contains a collection of results related to various aspects of neural networks and their bio-applications is [2, 1, 4, 5, 3].

As mentioned above, in this paper we take another look at the above described storage capacity problem. We will provide a relatively simple alternative framework to characterize it. However, before proceeding further with the presentation, we briefly sketch how the rest of the paper is organized. In Section 2 we present the main ideas behind the mechanism that we will use to study the storage capacity problem. This will be done in the so-called uncorrelated case, i.e. when no correlations are assumed among the patterns. In the last part of Section 2, namely Subsection 2.3, we then present a few results related to a somewhat harder version of a mathematical problem arising in the analysis of the storage capacity. Namely, we consider the validity of the fixed point inequalities (2) when $\kappa<0$. In Section 3 we then show the corresponding results when the patterns are correlated in a certain way. Finally, in Section 4 we provide a few concluding remarks.

## 2 Uncorrelated Gardner problem

In this section we look at the so-called uncorrelated case of the above described Gardner problem. In fact, such a case is precisely what we described in the previous section. Namely, we assume that all patterns $\xi^{\mu}$, $1\leq \mu\leq m$, are uncorrelated ($\xi^{\mu}$ stands for the vector $(\xi_1^{\mu},\xi_2^{\mu},\dots,\xi_n^{\mu})$). Now, to ensure that the targeted problem is stated clearly, we restate it again. Let $\alpha=\frac{m}{n}$ and assume that $H$ is an $m\times n$ matrix with i.i.d. $\{-1,1\}$ Bernoulli entries. Then the question of interest is: assuming that $\|\mathbf{x}\|_2=1$, how large can $\alpha$ be so that the following system of linear inequalities is satisfied with overwhelming probability:

$$H\mathbf{x} \geq \kappa \mathbf{1}_{m\times 1}. \qquad (8)$$

This of course is the same as asking how large $\alpha$ can be so that the following optimization problem is feasible with overwhelming probability:

$$\text{find } \mathbf{x} \quad \text{subject to} \quad H\mathbf{x} \geq \kappa \mathbf{1}_{m\times 1}, \quad \|\mathbf{x}\|_2 = 1. \qquad (9)$$

To see that (8) and (9) indeed match the above described fixed point condition, it is enough to observe that due to statistical symmetry one can assume $\xi_i^{\mu}=1$, $1\leq i\leq n$, $1\leq \mu\leq m$. Also, the constraints essentially decouple over the columns of $J$ (so one can then think of $\mathbf{x}$ in (8) and (9) as one of the columns of $J$). Moreover, the dimension of $\mathbf{x}$ in (8) and (9) should be changed to $n-1$; however, since we will consider a large-$n$ scenario, to make writing easier we keep the dimension as $n$.

Now, it is rather clear, but we do mention, that the overwhelming probability statement is taken with respect to the randomness of $H$. To analyze the feasibility of (9) we rely on a mechanism we recently developed for studying various optimization problems in [23]. Such a mechanism works for various types of randomness. However, the easiest way to present it is to assume that the underlying randomness is standard normal. So, to fit the feasibility of (9) into the framework of [23], we will need the matrix $H$ to be comprised of i.i.d. standard normals. We will hence, without loss of generality, assume in the remainder of this section that the elements of the matrix $H$ are indeed i.i.d. standard normals (towards the end of the paper we briefly mention why such an assumption changes nothing in the validity of the results; more on this topic can also be found in e.g. [19, 20, 23], where we discussed it a bit further).
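Before turning to the analysis, the feasibility question in (9) can also be probed empirically. The sketch below (our own illustration, not the paper's mechanism) draws a standard normal $H$ at loading $\alpha=1/2$ with $\kappa=0$ and runs a plain perceptron iteration, which finds a feasible unit-norm $\mathbf{x}$ whenever the system is strictly feasible, as it is here with overwhelming probability:

```python
import math
import random

random.seed(7)
n, m = 40, 20  # alpha = m/n = 0.5, well below the predicted capacity of 2
kappa = 0.0    # with kappa = 0 the constraints are scale-invariant in x

# i.i.d. standard normal H
H = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

# Plain perceptron iteration: a feasibility heuristic for (9), used here
# only for illustration. It converges whenever a strictly feasible x exists.
x = [0.0] * n
feasible = False
for _ in range(10000):
    violated = next((h for h in H
                     if sum(hj * xj for hj, xj in zip(h, x)) <= kappa), None)
    if violated is None:
        feasible = True
        break
    x = [xj + hj for xj, hj in zip(x, violated)]

if feasible:
    norm = math.sqrt(sum(v * v for v in x))
    x = [v / norm for v in x]  # scale onto the unit sphere; signs unchanged
print(feasible)
```

For $\alpha$ above the capacity the same search would (with overwhelming probability) never terminate with a feasible point, although a failed search is of course not a proof of infeasibility; that is exactly what the mechanism below delivers.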

Now, going back to problem (9), we first recognize that it can be rewritten as the following optimization problem

$$\xi_n = \min_{\mathbf{x}}\ \max_{\boldsymbol{\lambda}\geq 0}\ \boldsymbol{\lambda}^T(\kappa\mathbf{1}_{m\times 1} - H\mathbf{x}) \quad \text{subject to} \quad \|\boldsymbol{\lambda}\|_2 = 1,\ \|\mathbf{x}\|_2 = 1, \qquad (10)$$

where $\mathbf{1}_{m\times 1}$ is an $m$-dimensional column vector of all $1$'s. Clearly, if $\xi_n\leq 0$ then (9) is feasible. On the other hand, if $\xi_n>0$ then (9) is not feasible. That basically means that if we can probabilistically characterize the sign of $\xi_n$ then we have a way of determining $\alpha$ such that $\xi_n\leq 0$. Below, we provide a way that can be used to characterize $\xi_n$. We do so by relying on the strategy developed in [23, 22] and ultimately on the following set of results from [11, 12].

###### Theorem 1.

*([11, 12]) Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n$, $1\leq j\leq m$, be two centered Gaussian processes which satisfy: (1) $E(X_{ij}^2)=E(Y_{ij}^2)$; (2) $E(X_{ij}X_{ik})\geq E(Y_{ij}Y_{ik})$; (3) $E(X_{ij}X_{lk})\leq E(Y_{ij}Y_{lk})$ for $i\neq l$. Then, for any choice of real numbers $\lambda_{ij}$,*

$$P\left(\bigcap_{i=1}^{n}\bigcup_{j=1}^{m}\left\{X_{ij}\geq \lambda_{ij}\right\}\right) \leq P\left(\bigcap_{i=1}^{n}\bigcup_{j=1}^{m}\left\{Y_{ij}\geq \lambda_{ij}\right\}\right).$$

The following, simpler, version of the above theorem relates to the expected values.

###### Theorem 2.

*([11, 12]) Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n$, $1\leq j\leq m$, be two centered Gaussian processes which satisfy the conditions of Theorem 1. Then*

$$E\left(\min_{1\leq i\leq n}\max_{1\leq j\leq m}X_{ij}\right) \leq E\left(\min_{1\leq i\leq n}\max_{1\leq j\leq m}Y_{ij}\right).$$

We will split the rest of the presentation in this section into two subsections. First we will provide a mechanism that can be used to lower-bound $\xi_n$. After that we will provide its counterpart, which can be used to upper-bound a quantity similar to $\xi_n$ that has the same sign as $\xi_n$.

### 2.1 Lower-bounding $\xi_n$

We will make use of Theorem 1 through the following lemma (the lemma is of course a direct consequence of Theorem 1 and in fact is fairly similar to Lemma 3.1 in [12]; see also [19] for similar considerations).

###### Lemma 1.

Let $A$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\mathbf{g}$ and $\mathbf{h}$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $\zeta_{\boldsymbol{\lambda}}$ be a function of $\boldsymbol{\lambda}$. Then

(11) |

###### Proof.

Let with being an arbitrarily small constant independent of . We will first look at the right-hand side of the inequality in (11). The following is then the probability of interest

(12) |

After solving the minimization over and the maximization over one obtains

(13) |

where is vector with negative components replaced by zeros. Since is a vector of i.i.d. standard normal variables it is rather trivial that

(14) |

where is an arbitrarily small constant and is a constant dependent on but independent of . Along the same lines, since is a vector of i.i.d. standard normal variables it easily follows that

(15) |

where

(16) |

One then easily also has

(17) |

where is an arbitrarily small constant and analogously as above is a constant dependent on and but independent of . Then a combination of (13), (14), and (17) gives

(18) |

If

(19) | |||||

one then has from (18)

(20) |

We will now look at the left-hand side of the inequality in (11). The following is then the probability of interest

(21) |

Since (where is, as all other ’s in this paper are, independent of ) from (21) we have

(22) |

When is large from (22) we then have

(23) |

Assuming that (19) holds, then a combination of (11), (20), and (23) gives

(24) |

We summarize our results from this subsection in the following lemma.

###### Lemma 2.

Let be an matrix with i.i.d. standard normal components. Let be large and let , where is a constant independent of . Let be as in (30) and let be a scalar constant independent of . Let all ’s be arbitrarily small constants independent of . Further, let be a standard normal random variable and set

(25) |

Let be a scalar such that

(26) |

Then

(27) |

In more informal language (essentially ignoring all technicalities and $\epsilon$'s), one has that as long as

$$\alpha > \left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^2 e^{-\frac{z^2}{2}}\,dz\right)^{-1} \qquad (28)$$

the problem in (9) will be infeasible with overwhelming probability.

### 2.2 Upper-bounding (the sign of) $\xi_n$

In the previous subsection we designed a lower bound on $\xi_n$ which then helped us determine an upper bound on the critical storage capacity $\alpha_c$ (essentially the one given in (28)). In this subsection we provide a mechanism that can be used to upper-bound a quantity similar to $\xi_n$ (one that maintains the sign of $\xi_n$). Such an upper bound can then be used to obtain a lower bound on the critical storage capacity $\alpha_c$. As mentioned above, we start by looking at a quantity very similar to $\xi_n$. First, we recognize that when $\kappa\geq 0$ one can alternatively rewrite the feasibility problem from (9) in the following way:

(29) |

For our needs in this subsection, the feasibility problem in (29) can be formulated as the following optimization problem

subject to | (30) | ||||

For (29) to be infeasible one has to have . Using duality one has

subject to | (31) | ||||

and alternatively

subject to | (32) | ||||

We will now proceed in a fashion similar to the one presented in the previous subsection. We will make use of the following lemma (the lemma is fairly similar to Lemma 1 and, of course, to Lemma 3.1 in [12]; see also [19] for similar considerations).

###### Lemma 3.

Let $A$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\mathbf{g}$ and $\mathbf{h}$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $\zeta_{\boldsymbol{\lambda}}$ be a function of $\boldsymbol{\lambda}$. Then

(33) |

###### Proof.

The discussion related to the proof of Lemma 1 applies here as well. ∎

Let with being an arbitrarily small constant independent of . We will follow the strategy of the previous subsection and start by first looking at the right-hand side of the inequality in (33). The following is then the probability of interest

(34) |

where, for ease of writing, we removed a degenerate possibility (such a case contributes in no way to the probability of interest). After solving the minimization over $\mathbf{x}$ and the maximization over $\boldsymbol{\lambda}$ one obtains

(35) |

Now, we will for a moment assume that and are such that

(36) |

That would also imply that

(37) |

What is then left to be done is to determine an such that (36) holds. One then easily has

(38) |

where similarly to what we had in the previous subsection is vector with positive components replaced by zeros. Since is a vector of i.i.d. standard normal variables it is rather trivial that

(39) |

where is an arbitrarily small constant and is a constant dependent on but independent of . Along the same lines, since is a vector of i.i.d. standard normal variables it easily follows that

(40) |

where we recall

(41) |

One then easily also has

(42) |

where we recall that is an arbitrarily small constant and is a constant dependent on and but independent of . Then a combination of (38), (39), and (42) gives

(43) |

If

(44) | |||||

one then has from (43)

(45) |

A combination of (35), (36), (37), and (45) gives that if (44) holds then

(46) |

We will now look at the left-hand side of the inequality in (33). The following is then the probability of interest

(47) |

Since (where is, as all other ’s in this paper are, independent of ) from (47) we have

(48) |

When is large from (48) we then have

(49) |

Assuming that (44) holds, then a combination of (32), (33), (46), and (49) gives

(50) | |||||

From (50) one then has

(51) |

which implies that (29) is feasible with overwhelming probability if (44) holds.

We summarize our results from this subsection in the following lemma.

###### Lemma 4.

Let be an matrix with i.i.d. standard normal components. Let be large and let , where is a constant independent of . Let be as in (30) and let be a scalar constant independent of . Let all ’s be arbitrarily small constants independent of . Further, let be a standard normal random variable and set

(52) |

Let be such that

(53) |

Then

(54) |

Moreover,

(55) |

###### Proof.

Follows from the above discussion. ∎

Similarly to what was done in the previous subsection, one can again be a bit more informal and ignore all technicalities and $\epsilon$'s. After doing so, one has that as long as

$$\alpha < \left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^2 e^{-\frac{z^2}{2}}\,dz\right)^{-1} \qquad (56)$$

the problem in (9) will be feasible with overwhelming probability. Moreover, combining the results of Lemmas 2 and 4 one obtains (of course in an informal language) for the storage capacity

$$\alpha_c(\kappa) = \left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^2 e^{-\frac{z^2}{2}}\,dz\right)^{-1}. \qquad (57)$$

The value obtained for the storage capacity in (57) matches the one obtained in [10] while utilizing the replica approach. In [17, 18] as well as in [24] the above was then rigorously established as the storage capacity. In fact a bit more is shown in [17, 18, 24]. Namely, the authors considered a partition function type of quantity (i.e. a free energy type of quantity) and determined its behavior in the entire temperature regime. The storage capacity is essentially obtained based on the ground state (zero-temperature) behavior of such a free energy.

### 2.3 Negative $\kappa$

In [24], Talagrand raised a question related to the behavior of the spherical perceptron when $\kappa<0$. Along the same lines, Conjecture 8.4.4 was formulated in [24], stating that if $\kappa<0$ and $\alpha>\alpha_c(\kappa)$ (with $\alpha_c(\kappa)$ as in (6)) then the problem in (9) is infeasible with overwhelming probability. The fact that $\kappa\geq 0$ was never really used in our derivations in Subsection 2.1. In other words, the entire derivation presented in Subsection 2.1 holds even if $\kappa<0$. The results of Lemma 2 then imply that for any $\kappa<0$, if

$$\alpha > \left(\frac{1}{\sqrt{2\pi}}\int_{-\kappa}^{\infty}(z+\kappa)^2 e^{-\frac{z^2}{2}}\,dz\right)^{-1}, \qquad (58)$$

then the problem in (9) is infeasible with overwhelming probability. This resolves Talagrand's Conjecture 8.4.4 from [24] in the affirmative. Along the same lines, it partially answers Problem 8.4.2 from [24] as well.

## 3 Correlated Gardner problem

What we considered in the previous section is the standard Gardner problem, or the standard spherical perceptron. Such a perceptron assumes that all patterns (essentially the rows of $H$) are uncorrelated. In [10] a correlated version of the problem was considered as well. The following, relatively simple, type of correlations was analyzed: instead of assuming that all elements of $H$ are i.i.d. symmetric Bernoulli random variables, one can assume that each $H_{ij}$ is an asymmetric Bernoulli random variable. To be a bit more precise, the following type of asymmetry was considered:

$$P(H_{ij}=1) = p, \qquad P(H_{ij}=-1) = 1-p. \qquad (59)$$

In other words, each $H_{ij}$ was assumed to take value $1$ with probability $p$ and value $-1$ with probability $1-p$. Clearly, $EH_{ij}=2p-1$. If $p=\frac{1}{2}$ one has the fully uncorrelated scenario (essentially, precisely the scenario considered in Section 2). On the other hand, if $p=1$ one has the fully correlated scenario where all patterns are basically equal to each other. Of course, one then wonders in what way the above introduced correlations impact the value of the storage capacity. The first observation is that as the correlations grow, i.e. as $p$ grows, one expects that the storage capacity should grow as well. Such a prediction was indeed confirmed through the analysis conducted in [10]. In fact, not only was such an expectation confirmed, the exact change in the storage capacity was quantified as well. In this section we will present a mathematically rigorous approach that confirms the predictions given in [10].
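A quick simulation illustrates why the asymmetry in (59) correlates the patterns: two independent entries drawn with the same $p$ have product mean $(2p-1)^2$, which vanishes only at $p=\frac{1}{2}$. The parameter values and sample size below are our own illustrative choices:

```python
import random

random.seed(3)
p, N = 0.8, 200000  # illustrative asymmetry parameter and sample size

def draw():
    """One sequence of i.i.d. asymmetric +-1 Bernoulli entries, as in (59)."""
    return [1 if random.random() < p else -1 for _ in range(N)]

a, b = draw(), draw()  # two independent "pattern" sequences
mean_a = sum(a) / N
overlap = sum(x * y for x, y in zip(a, b)) / N

print(abs(mean_a - (2 * p - 1)) < 0.02)        # empirical mean near 2p-1 = 0.6
print(abs(overlap - (2 * p - 1) ** 2) < 0.02)  # overlap near (2p-1)^2 = 0.36
```

The nonzero typical overlap between independently drawn patterns is exactly the correlation effect whose impact on the storage capacity is quantified below.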

We start by recalling how the problems in (8) and (9) transform when the patterns are correlated. Essentially, instead of (8) one then looks at the following question: assuming that $\|\mathbf{x}\|_2=1$, how large can $\alpha$ be so that the following system of linear inequalities is satisfied with overwhelming probability:

(60) |

where is the first column of and are all columns of except column . Also, is a diagonal matrix with elements on the main diagonal being the elements of . This of course is the same as if one asks how large can be so that the following optimization problem is feasible with overwhelming probability

(61) |

where the elements of $H$ are i.i.d. asymmetric Bernoulli random variables distributed according to (59). Also, the size of $\mathbf{x}$ in (61) should be $(n-1)\times 1$; however, as in the previous section, to make writing easier we will view it as an $n\times 1$ vector.