Problem Detail:

Let $A$ be a sorted array. Suppose that this array is huge and can only be stored on disk. In the I/O model and algorithms built in this model, every time we access the disk we spend one I/O and we get $B$ elements into RAM. Since accessing the disk is much more expensive than accessing main memory, we assume that whatever operations we perform in main memory come for free. So even if you scan the entire RAM a million times, in the I/O model this costs $O(1)$ I/Os, and I/Os are the only thing we care about.

Suppose that you now receive queries of the form $[l,r, key]$, where $l$ and $r$ are indices which define a contiguous chunk of the array $A$, and $key$ is a value that you want to search for inside this chunk. So you want to know whether the key is somewhere in that chunk or not.

If you expect many queries to come, and we will make this assumption as well, it might be worth spending $O(\frac{N}{B})$ I/Os to build a B-tree on top of the array $A$, and use the B-tree for your searches. Since the height of a B-tree is $O(\log_{B}(N))$, each search will require $O(\log_{B}(N))$ I/Os.

However, I was wondering: if the size of the chunk defined by the query is $n$, is it possible to use the same B-tree to achieve $O(\log_{B}(n))$ I/Os? Because if we always start searching from the root of the B-tree, we would first have to find where our chunk is, and then search inside the chunk, which is why the total number of I/Os would be $O(\log_{B}(N))$.

My idea would be to spend 1 I/O to access the leaf block of the B-tree containing element $A[l]$, 1 I/O to access the leaf block containing element $A[r]$, and then spend $O(\log_{B}(n))$ I/Os to find the lowest common ancestor block of these two leaf blocks. Since we have found the lowest common ancestor block, we can start searching from there, spending in total $2 + O(\log_{B}(n)) + O(\log_{B}(n))$ I/Os.

Unfortunately I am a novice in the I/O model, and this approach sounds too good to be correct. Is this problem really that easy, or am I missing something?

I assume the "B-tree" in your post is in fact a B+-tree and that array $\small A$ is stored in increasing order. Your idea is almost right and needs only a small revision.

• You need to test whether $\small A[l] \leq key$ and $\small A[r] \geq key$. If $\small A[l] = key$ or $\small A[r] = key$, it is done.

• If it is found that $\small A[l] < key$ and $\small A[r] > key$, find the LCA of the blocks containing $\small A[l]$ and $\small A[r]$, as you have done. The height of the LCA is $\small \mathcal{O}(\log_B(n))$.

• If $\small A[l] > key$ or $\small A[r] < key$, then obviously $\small key$ does not exist in $\small [l, r]$.

Why is testing $\small A[l] \leq key$ and $\small A[r] \geq key$ important?

The LCA block may contain entries that direct to blocks outside $\small [l, r]$. If you start from the LCA, you may find $\small key$ even if it does not exist in $\small [l, r]$.
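Ignoring the blocking into $B$-sized nodes, the three cases can be sketched with a plain sorted list standing in for the B+-tree leaves (a sketch; the function name is mine):

```python
import bisect

def chunk_search(A, l, r, key):
    # Case 1: the key sits at one of the chunk's endpoints.
    if A[l] == key or A[r] == key:
        return True
    # Case 2: the key is outside the chunk's value range, so it is absent.
    if A[l] > key or A[r] < key:
        return False
    # Case 3: A[l] < key < A[r]; search only within [l, r]
    # (in the B+-tree this is where you would descend from the LCA block).
    i = bisect.bisect_left(A, key, l, r + 1)
    return A[i] == key
```

The endpoint tests are exactly what prevents a hit on an occurrence of $key$ that lies under the LCA block but outside $[l, r]$.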

Question Source : http://cs.stackexchange.com/questions/64296

Problem Detail:

Given a multiset $X=\{x_1,\dots,x_n\}$ where every element $x_i$ is a power of two, and given an integer $M$, I'd like to determine whether there is any subset of $X$ that sums to $M$.

(This question is based on a previous question about the subset problem where every number in the set is a multiple of two, but this time $X$ consists only of powers of two.)

Can we find an efficient algorithm for this problem? Could we make it any better than standard algorithms for subset sum or knapsack?

###### Answered By : Aleksi Torhamo

First, look at the bits that are set in $M$ and write it as $M = m_1+m_2+\dots$ where each $m_i$ is a power of two and such that $m_1$ is the smallest power of two, and so on.

If the numbers $x_i \in X$ are distinct powers of two and you can use each one zero or one times, the problem is trivial: If all $m_i$ are found in $X$, then $M$ can be formed by them, otherwise it can't. This is because missing powers of two can't be formed by a) any combination of larger powers of two or b) distinct smaller powers of two.

If the numbers $x_i$ aren't guaranteed to be distinct, it's a bit more complex. Missing powers of two still can't be formed by larger powers of two, but they can be formed by non-distinct smaller powers of two.

Let's look at how smaller powers of two can form larger powers of two. If you have powers of two $A = [a_1, \dots, a_n]$ that are smaller than a power of two $N$, and $S=\sum_{i=1}^n a_i$, then $N$ can be formed by them iff $S \ge N$. Let $b$ be the smallest power of two in $A$ and $c$ its number of occurrences in $A$. If $c$ is odd, one $a_i=b$ can't combine with anything to form a higher power of two and the corresponding bit will be set in $S$. The others can be replaced with $\lfloor \frac{c}{2} \rfloor$ copies of the number $2b$ in $A$. If we continue this with higher powers of two until we get to $N$, we find that we can form $\lfloor \frac{S}{N} \rfloor$ copies of $N$ with $A$.

So, start with $m_1$. If it's found in $X$, we remove it and move to the next power of two. If it's not, we calculate the sum $S$ of all $x_i$ smaller than $m_1$ and remove them from $X$. If $S \lt m_1$, then $M$ can't be formed by $X$. Otherwise, we add $\lfloor \frac{S}{m_1} \rfloor - 1$ copies of $m_1$ to $X$ and continue in the same manner with the next smallest power of two in $M$.
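The procedure above can be transcribed almost literally (a sketch; the function name is mine):

```python
from collections import Counter

def subset_sum_powers_of_two(xs, M):
    """Decide whether some subset of the powers of two in xs sums to M."""
    counts = Counter(xs)
    bit = 1
    while M:
        if M & bit:  # this power of two is one of the m_i
            if counts[bit] >= 1:
                counts[bit] -= 1
            else:
                # sum S of all remaining powers smaller than this m_i
                S = sum(p * c for p, c in counts.items() if p < bit)
                if S < bit:
                    return False
                for p in [p for p in counts if p < bit]:
                    del counts[p]
                # they stand in for floor(S / m_i) copies of m_i,
                # one of which is consumed right now
                counts[bit] += S // bit - 1
            M ^= bit
        bit <<= 1
    return True
```

Unneeded smaller powers are left in the multiset until a missing $m_i$ forces them to be merged, matching the "continue in the same manner" step.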

Question Source : http://cs.stackexchange.com/questions/66136

Problem Detail:

How many different binary search trees are possible that store the values 1,2,...,n ?

So far I found a recursive formula for the number (by case distinction what's at the root):

$T(n) = 2T(n-1) + \sum_{i=2}^{n-1}T(i-1)T(n-i), n > 1$ and $T(1) = 1$

But I have no idea how to solve this recursion. Our task was only to find the recursion, and I believe mine to be correct. But I am very interested in a closed formula for it. Can anyone link me to some resources/books or give a general hint on how it can be solved?

###### Answered By : Yuval Filmus

The solution to your recurrence is $$T(n) = \frac{(2n)!}{n!(n+1)!},$$ also known as the Catalan numbers. The quickest way to find this is by computing a few elements of the sequence and using the OEIS to identify the sequence.
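The recurrence can be checked against the closed form numerically. Note that with the convention $T(0)=1$, the $2T(n-1)$ term in the question is just the $i=1$ and $i=n$ cases of the sum (a sketch):

```python
from math import factorial

def T(n):
    # T(n) = sum_{i=1}^{n} T(i-1) T(n-i), with T(0) = T(1) = 1,
    # i.e. the question's recurrence with the boundary terms folded in
    table = [1] * (n + 1)
    for k in range(2, n + 1):
        table[k] = sum(table[i - 1] * table[k - i] for i in range(1, k + 1))
    return table[n]

def catalan(n):
    # closed form (2n)! / (n! (n+1)!)
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))
```

Computing a few values (1, 2, 5, 14, 42, ...) and searching the OEIS is exactly how one lands on the Catalan numbers.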

Question Source : http://cs.stackexchange.com/questions/66617

Problem Detail:

I understand that the Backus-Naur Form is used to specify type-2 grammars, but is it in itself a form of type-2 grammar?

###### Answered By : André Souza Lemos

Backus-Naur Form is a conventional notation for writing Context-Free Grammars.

Its usefulness comes from the need to distinguish the character encoding standards that are used to write texts in various computer languages from the higher-order alphabets of terminal and non-terminal symbols (corresponding to lexical units and syntactical variables, respectively) used in the definition of grammars.

The very idea of using CFGs to define the syntax of high-level programming languages was remarkably precocious, which is another reason why the work of John Backus and Peter Naur was so appreciated at the time.

Question Source : http://cs.stackexchange.com/questions/66686

Problem Detail:

A (non-deterministic) finite state automaton for "all strings that contain substring $w$" is very simple. Just make the path for $w$ and add looping transitions for all letters at the initial state and at the final state.

When asked to make a deterministic automaton in general we might expect that the automaton can be of exponential size (see the warning of Yuval in the case where $w=101$: Write a regex to match string that does NOT contain a certain pattern).

However, Knuth-Morris-Pratt tells us that a pattern-matching automaton for $w$ has linear size, and can be constructed in linear time. That means that the deterministic automaton for "contains substring $w$" has linear size too (failure links can be replaced by transitions, one for each mismatching letter). Luke has used KMP to provide an automaton for the example "not ABCDABD" in his answer Automaton for substring matching.

My question is the following: will the standard determinization of a "substring $w$" automaton always lead to a linear sized automaton? Here I mean the practical version of the construction, where unreachable states are not constructed (so not the full $2^Q$ subset construction). Is the answer direct from the construction, or do we need KMP?

A little observation. In the construction there might be many states that contain the original final state. As they all accept, and all transitions from them will forever lead to accepting states (once we have seen $w$), these states can be grouped into one single state. I do not know whether that trick is necessary to avoid exponential explosion; see http://cs.stackexchange.com/a/11842/4287

###### Answered By : Yuval Filmus

Consider the following NFA: the states are $q_0,\ldots,q_{|w|}$, there are transitions $q_{i-1} \to q_i$ on $w_i$, there are self-loops on $q_0,q_{|w|}$ (for all alphabet symbols), the initial state is $q_0$, and the unique final state is $q_{|w|}$.

Assuming $w \neq \epsilon$, the determinization generates $2|w|$ states:

• For each location $0 \leq i < |w|$, a state consisting of $q_0$, $q_i$, and $q_j$ for $j < i$ such that $w_1\ldots w_j$ is a suffix of $w_1\ldots w_i$.

• Same as before, with $q_{|w|}$ added.

When $w = \epsilon$, the NFA is already deterministic, and contains only one state.
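The $2|w|$ count is easy to confirm by running the reachable-subset construction directly (a quick experiment of mine, not part of the answer):

```python
def reachable_dfa_states(w, alphabet):
    # NFA states 0..len(w); self-loops on 0 and len(w);
    # edge (i-1) -> i labelled w[i-1]. Subset construction,
    # generating only the subsets reachable from {0}.
    n = len(w)

    def step(S, a):
        T = set()
        for q in S:
            if q == 0 or q == n:      # self-loops on the end states
                T.add(q)
            if q < n and w[q] == a:   # advance along the pattern
                T.add(q + 1)
        return frozenset(T)

    start = frozenset({0})
    seen, stack = {start}, [start]
    while stack:
        S = stack.pop()
        for a in alphabet:
            nxt = step(S, a)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

For nonempty $w$ the number of reachable subsets comes out as $2|w|$, matching the analysis above.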

Question Source : http://cs.stackexchange.com/questions/66535

Problem Detail:

On page 38 of "Lecture Notes in Computer Science" by Christoph M. Hoffmann, there is an algorithm (ALGORITHM 2).

I have some confusions.

Why is it written that an entry $M_{i,j}$, $j < i$, cannot be referenced? What is the meaning of "referenced" here?

Thanks.

###### Asked By : Mike SQ

One possible explanation is that it is indicating that you should never try to read or use any matrix element $M_{i,j}$ where $j<i$. "Referenced" might be talking about "dereferencing" a pointer, i.e., reading a memory cell, i.e., reading an entry of the matrix. Perhaps the text is indicating that the matrix entries $M_{i,j}$ are only defined for $j \ge i$.

I suggest you read the text with this possible perspective in mind and see if it seems consistent with the surrounding context.

Question Source : http://cs.stackexchange.com/questions/60626

Problem Detail:

I am using a sensing board able to detect magnetic signals between the board and a display.

I have a set of objects that are represented (each of them) by a unique set of points (magnets) with a particular shape. For example: object #1 is made by three points that form an equilateral triangle with side length 1cm; object #2 is made by three points that form a right triangle with sides 3cm, 4cm, 5cm; object #3 is made by three aligned points with distance 2cm; and so on. I can have a multiplicity of objects with unique patterns.

Now I have a list of points with their coordinates in the Cartesian plane, and I need to match them against the patterns I got from the objects. I also know that every point must be matched, which lets me minimize the overlapping errors. In practice, every point in the set can belong to at most one object, and at the same time it must belong to some object from the initial set.

Any idea on how to do that in an efficient way?

###### Answered By : Karolis Juodelė

In the general case a problem like this is NP-hard; however, in the vast majority of real cases it should be easy.

20 points make $\binom{20}{3} = 1140$ triangles, so it shouldn't be hard to pick out the triangles most similar to your basic shapes (unless the shapes can be more complicated). A little ugly backtracking may be needed when the top-scoring triangles overlap.

Also, if most magnets move continuously, you can easily map old points to new points and old triangles to new triangles.

What I'm talking about are fairly obvious methods. There may be smarter ways to do this, but you don't necessarily need them.
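The triangle-scoring idea can be sketched as follows (the function names and the error measure are my own choices, not from the answer):

```python
from itertools import combinations
from math import dist

def signature(p, q, r):
    # sorted side lengths: invariant under rotation, translation, reflection
    return sorted([dist(p, q), dist(q, r), dist(r, p)])

def rank_triangles(points, templates):
    # templates: name -> sorted side lengths of a known object.
    # Returns (error, name, triple) sorted so the best matches come first.
    scored = []
    for tri in combinations(points, 3):
        sig = signature(*tri)
        for name, sides in templates.items():
            err = sum(abs(a - b) for a, b in zip(sig, sides))
            scored.append((err, name, tri))
    scored.sort(key=lambda t: t[0])
    return scored
```

Greedily taking the best-scoring disjoint triples, with backtracking on conflicts, then implements the assignment described in the answer.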

Question Source : http://cs.stackexchange.com/questions/66144

Problem Detail:

I've seen the topic of the automorphism group appear in several introductory graph theory books I've looked at. It always feels oddly disjointed and poorly motivated to me.

Are there any practical (or impractical, for that matter) applications of knowing the automorphism group of a graph?

###### Answered By : Stella Biderman

Automorphisms capture a natural notion of symmetry of graphs. As a result, they can be used to speed up algorithms that would otherwise run slowly by cutting down the search space.

For example, integer programming is usually solved via branch-and-bound. However, if an equation is degenerate this can take far too long to run, because it has to keep checking symmetric parts of the tree. We can use graph automorphisms to compute the orbits of variables in the linear programming problem, and then treat parts with the same orbit as identical. A recent application of such techniques to MILP can be read here.
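To make the notion of symmetry concrete, here is a brute-force enumeration of a tiny graph's automorphisms (my own sketch; feasible only for very small graphs):

```python
from itertools import permutations

def automorphisms(n, edges):
    # all relabelings of vertices 0..n-1 that map the edge set to itself
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in edges} == E]
```

A path on three vertices has only the identity and the end-swap, while a triangle admits all $3! = 6$ permutations.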

Question Source : http://cs.stackexchange.com/questions/65391

Problem Detail:

Since parity game solving is in TFNP ("Total Function Nondeterministic Polynomial") (and the decision version is in NP ∩ coNP), I wonder whether it is contained in PLS ("Polynomial Local Search") or PPA ("Polynomial Parity Argument")? Add PPP ("Polynomial Pigeonhole Principle") if you want, even though this would probably mean that it is already contained in PPAD ("Polynomial Parity Arguments on Directed graphs"), and hence in PPA.

Or is it rather the other way round and parity game solving can be shown to be hard for PLS or PPAD? But that would be surprising, since a recursive algorithm that solves parity games is known (even if it is not efficient in the worst case).

###### Answered By : Rahul Savani

Yes, solving parity games is known to be in PPAD (and thus PPA and PPP too) and PLS, and is thus unlikely to be hard for either (since this would imply containment of one of these classes in the other).

See, e.g.,

Daskalakis, Constantinos, and Christos Papadimitriou. "Continuous local search." Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. SIAM, 2011.

and combine the membership of Simple Stochastic Games (SSGs) in CLS (which is in PPAD and PLS) with the well-known observation that solving parity games can be reduced to solving SSGs in polynomial time.

The reason that these problems are in PPAD is that they admit "optimality equations", rather like Bellman equations, that characterize solutions as fixed points. The reason these problems are in PLS is that they can be solved with local improvement algorithms like strategy improvement (a two-player generalization of policy iteration for MDPs).

Question Source : http://cs.stackexchange.com/questions/64368

Problem Detail:

I have 2 search algorithms and I have derived the following tight bound representations:

$$n\log(n)+m\log(n)$$

$$m \cdot n$$

Now I want to find a function $f(n)$ so that when $m$ is $\Theta(f(n))$, both algorithms have equal asymptotic run time.

Now I'm not sure if it's just as simple as setting the two expressions equal to each other and isolating $m$, or if it is more than that.

###### Answered By : Rick Decker

Your approach, for this problem at least, will work, but there are interesting things happening in the background. Simply setting the two terms equal and solving for $m$ will give you $$m=\frac{n\log n}{n-\log n}$$ However, I doubt that you want to substitute this back into the two original expressions and try to find a $g(n)$ so that each of your original expressions is in $O(g(n))$. Let's try something else.

For no particular reason, let's try letting $m=\log n$. Then your two expressions become \begin{align} n\log n+m\log n&=n\log n+\log^2n \in O(n\log n)\text{, and}\\ n\:m&=n\log n\in O(n\log n) \end{align} Hooray! If $m=f(n)=\log n$ we get a common upper bound with $g(n)=n\log n$.

Let's do the same thing, now with $m=n$. The two expressions are now \begin{align} n\log n+m\log n&=n\log n+n\log n \in O(n\log n)\text{, and}\\ n\:m&=n\; n\in O(n^2) \end{align} Drat. No common upper bound, but by transitivity, we have $n\log n+m\log n\in O(n^2)$ and so again, we find a common upper bound, namely the asymptotically larger $g(n)=n^2$.

You can do this for all kinds of other $f(n)$, like $f(n)=\log\log n$ and even $f(n)=1$, both of which have common solutions $g(n)=n\log n$. In fact, it seems that we can do this for any $f$ whatsoever.
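A quick numeric check of the $m = \log n$ case (my own sketch) shows the two expressions tracking each other:

```python
from math import log

def expr1(n, m):
    return n * log(n) + m * log(n)   # n log n + m log n

def expr2(n, m):
    return n * m                     # n * m

def ratio_at(n):
    # with m = log n both expressions are Theta(n log n),
    # so their ratio tends to 1 as n grows
    m = log(n)
    return expr1(n, m) / expr2(n, m)
```

The ratio is $1 + \log n / n$, which visibly approaches $1$.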

Question Source : http://cs.stackexchange.com/questions/64012

Problem Detail:

Are there any special algorithms for maximum independent set of line graphs? Could this special case be in $\mathsf{P}$?

Finding a maximum independent set in $L(G)$ is equivalent to finding a maximum matching in $G$. For more, and a fast polynomial-time algorithm that works for, e.g., line graphs, see [1].
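Since the vertices of $L(G)$ are the edges of $G$, adjacent exactly when they share an endpoint, an independent set in $L(G)$ is literally a set of pairwise disjoint edges of $G$. A brute-force check of that quantity (my own sketch; a real implementation would use a polynomial-time matching algorithm such as blossom):

```python
from itertools import combinations

def max_matching_size(edges):
    # largest set of pairwise vertex-disjoint edges
    # (equivalently: maximum independent set size in the line graph)
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            endpoints = [v for e in sub for v in e]
            if len(endpoints) == len(set(endpoints)):
                return k
    return 0
```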

Question Source : http://cs.stackexchange.com/questions/61164

Problem Detail:

I want to add a constraint to a convex program, to guarantee some matrix $A$ to be positive semidefinite. How should I do it?

The library I am working with can cope with linear/ quadratic inequalities only.

By definition, $A$ is positive semidefinite iff $\forall x \in \mathbb R^n : x^T A x \geq 0$, but this is a set of infinitely many constraints. So, my question is: how can I formulate it using finitely many constraints, and using linear/quadratic inequalities only?

###### Asked By : Dudi Frid

A matrix $A$ is positive semidefinite if and only if there exists a matrix $V$ such that $$A = V^\top V.$$ So, you can use the entries of $V$ as your unknowns, and express each entry of $A$ as a quadratic function of the unknowns. Whenever you want to use $A$, instead rewrite that equation in terms of the entries of $V$.
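To illustrate why this works (pure-Python sketch; the helper names are mine): any Gram matrix $V^\top V$ passes the quadratic-form test automatically, since $x^\top V^\top V x = \lVert Vx \rVert^2 \ge 0$.

```python
def gram(V):
    # A = V^T V, with V given as a list of rows
    m, n = len(V), len(V[0])
    return [[sum(V[k][i] * V[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

def quad_form(A, x):
    # x^T A x; nonnegative whenever A is a Gram matrix
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
```

Each entry $A_{ij}$ is a quadratic function of the entries of $V$, which is exactly the form the library can handle.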

Question Source : http://cs.stackexchange.com/questions/63013

Problem Detail:

Ok, this is a follow-up question to this one, about the Sieve of Eratosthenes.

If we look at this paper (page 3, footnotes), the author says:

If we start crossing off at $p^2$ rather than $p$, the number of composites we cross off is $\frac{n}{p}−p+1$, but it makes no significant difference to our sum, because it only subtracts an irrelevant $O(n/ \log n)$ factor

The total cost counting crossings is:

$$\sum_{p\leq n}\bigl(\frac{n}{p}-p+1\bigr)=n\sum_{p\leq n}\frac{1}{p}-\sum_{p\leq n}p+\sum_{p\leq n}1$$

The first sum would give us the known $O(n\log\log n)$. But I don't see how the second sum is an "irrelevant $O(n/\log n)$", so how does that work?

According to this post in math overflow an upper bound for the sum of the primes up to $n$ is: $$\frac{n^2}{2\log n} + O\left(\frac{n^2}{\log^2 n}\right).$$

###### Answered By : Yuval Filmus

The sum only goes up to $\sqrt{n}$. You can see that in the big display at the bottom of page 3, where only the first $\pi(\sqrt{n})$ primes are considered. Substituting $\sqrt{n}$ for $n$ in your formula, we get $n/\log n + O(n/\log^2 n)$.
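As a sanity check, one can count the crossings of the sieve empirically (my own sketch; each prime starts crossing off at $p^2$ as in the footnote, so only primes up to $\sqrt{n}$ contribute):

```python
def sieve_crossings(n):
    # count cross-off operations when each prime p starts at p*p
    is_composite = [False] * (n + 1)
    crossings = 0
    for p in range(2, int(n ** 0.5) + 1):
        if not is_composite[p]:
            for multiple in range(p * p, n + 1, p):
                is_composite[multiple] = True
                crossings += 1
    return crossings
```

For $n = 100$ only the primes $2, 3, 5, 7$ do any crossing, contributing $49 + 31 + 16 + 8 = 104$ operations.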

Question Source : http://cs.stackexchange.com/questions/66261

Problem Detail:

Wythoff's game is as follows: there are two players $A$ and $B$ ($A$ being the first player) and there are $2$ piles of stones. On his turn a player can remove one or more stones from any one pile, or the same number of stones from both piles. A player unable to make a move loses. It is also assumed that both players play optimally.

As mentioned in the link, the losing configurations $(n_k,m_k)$ ($n_k$ stones in one pile and $m_k$ stones in the other) are given by $$n_k = \lfloor k\phi \rfloor$$ $$m_k = \lfloor k\phi^2 \rfloor$$ where $k$ is any natural number (and $n_k \le m_k$) and $\phi=\frac{1+\sqrt{5}}{2}$. For example $\{1,2\}$ (two stones in one pile and one stone in the other) is a losing position.

Modification to Wythoff's game:

In this modified game, instead of ${\it 2}$ piles there are ${\it 3}$ piles. On his turn a player can remove one or more stones from any one pile, the same number of stones from any $2$ piles, or the same number of stones from all $3$ piles.
How do I compute the losing positions for this modified game efficiently? The inefficient way, of course, is to find the Grundy number of each configuration $\{a,b,c\}$. This method is inefficient because I want to count the losing positions given that the number of stones in each pile can be between $1$ and $1000$.

This is a project Euler question ( stone game ). I would appreciate hints only.

There is no closed form for $n=3$ piles. But one can solve the question efficiently using dynamic programming.
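Since you asked for hints only: below is just the generic win/lose dynamic program for very small piles (my own sketch), useful for spotting the pattern of losing positions; as written it is far too slow for piles up to $1000$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(state):
    # state: sorted triple of pile sizes; True iff the player to move wins.
    # Moves: remove the same k >= 1 stones from any nonempty subset of piles
    # (one pile, any two piles, or all three).
    piles = list(state)
    for k in range(1, max(piles) + 1):
        for mask in range(1, 8):  # the 7 nonempty subsets of 3 piles
            new = piles[:]
            for i in range(3):
                if mask >> i & 1:
                    new[i] -= k
            if min(new) >= 0 and not is_winning(tuple(sorted(new))):
                return True
    return False  # every move leads to a winning position (or no move exists)
```

Iterating over all sorted triples and counting the `False` ones enumerates the losing positions for small bounds.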

Question Source : http://cs.stackexchange.com/questions/57760

Problem Detail:

Supposed I have a set of simple rules, e.g.

if A is between 10 and 20 and B is between 3 and 5, then use 3

and these rules are used in some kind of (more or less) mathematical formula (for example, $A + B = C$), would you count this as a form of reasoning or inference in a knowledge-based system?

You have a knowledge base (the rules) and you get a result (a decision) by applying these rules in a specific way.
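The setup described above can be sketched in a few lines (the rule table, names, and values here are hypothetical, mirroring the example rule in the question):

```python
# each rule: ((A_low, A_high), (B_low, B_high), value_to_use)
RULES = [
    ((10, 20), (3, 5), 3),
    ((0, 10), (0, 3), 1),
]

def infer(a, b):
    # "inference" here is just a linear scan over the rule table
    for (a_lo, a_hi), (b_lo, b_hi), value in RULES:
        if a_lo <= a <= a_hi and b_lo <= b <= b_hi:
            return value
    return None

def formula(a, b):
    # the inferred value then feeds the (more or less) mathematical formula
    v = infer(a, b)
    return None if v is None else a + v
```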

You describe what is called an expert system in AI. As such, they are artifacts of artificial intelligence research.

As far as the Wikipedia definition goes,

a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction,

which is vague enough to cover expert systems.

Is there any "intelligence" or independent reasoning in these systems? Certainly not; they are "stupid" in the sense that they just look up values in a table; no learning happens.

You will have to decide which frame of reference you want to apply: academic AI or common sense?

Question Source : http://cs.stackexchange.com/questions/59494

Problem Detail:

I have these 4 symbols with their probabilities:

x   P(x)
--------
1   0.3
2   0.3
3   0.2
4   0.2

I built the Huffman tree in this way:

and I obtained:

x   P(x)   C(x)
----------------
1   0.3     0
2   0.3     10
3   0.2     110
4   0.2     111

Is it correct? Because according to the solution the results should be:

x   P(x)   C(x)
----------------
1   0.3     00
2   0.3     01
3   0.2     10
4   0.2     11

Why? Yet I followed the steps shown here.

###### Answered By : Denis Pankratov

A quick way to check whether your answer has a chance of being correct is to compute the average code length. Your encoding gives the average length of $2.1$, which is greater than using a code of fixed length $2$, so it can't be correct.

If you follow the priority queue algorithm from the source you cite, then you would notice that after merging nodes 3 and 4 you get one supernode of priority 0.4. Now your queue would have three elements of priorities $0.3, 0.3,$ and $0.4$. Thus, you would next merge elements corresponding to priorities $0.3$ and $0.3$ (the algorithm works by merging two nodes with lowest priorities), which happen to be nodes 1 and 2.
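You can replay the priority-queue construction with `heapq` (a sketch; `huffman_code_lengths` is my own helper) and confirm that all four codewords come out with length 2:

```python
import heapq

def huffman_code_lengths(probs):
    # probs: symbol -> probability; returns symbol -> code length
    heap = [(p, [sym]) for sym, p in probs.items()]
    heapq.heapify(heap)
    length = {sym: 0 for sym in probs}
    while len(heap) > 1:
        p1, syms1 = heapq.heappop(heap)  # the two lowest priorities
        p2, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:        # every symbol under the new
            length[sym] += 1             # supernode gets one bit deeper
        heapq.heappush(heap, (p1 + p2, syms1 + syms2))
    return length
```

The first merge joins 3 and 4 (priorities 0.2 and 0.2) into a 0.4 supernode, so the second merge must join 1 and 2, exactly as described above.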

Question Source : http://cs.stackexchange.com/questions/57960

Problem Detail:

I was reading a paper and I came to the following :

"Since independent set is $Ω(n^{1−\epsilon})$-inapproximable unless P=NP (see [19]) for any fixed $\epsilon> 0$, the ..." where [19] is the following article : D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. In STOC, pages 681–690, 2006.

My question: is it true that independent set is $Ω(n^{1−\epsilon})$-inapproximable unless P=NP? I cannot find this result in the cited article; however, I can find the results about max clique and chromatic number. What am I missing?

Yes, this is true.

To see this, notice that cliques and independent sets are complementary. That is, a set of vertices $S$ is independent precisely when $S$ forms a clique in the complement of the graph. Intuitively, this means that if you had an approximation algorithm for finding a maximum clique, you could use it to find a maximum independent set by executing it on the complement graph.
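The complement trick is easy to check by brute force on small graphs (my own sketch, exponential time, small $n$ only):

```python
from itertools import combinations

def complement(n, edges):
    # edge set of the complement graph on vertices 0..n-1
    E = {frozenset(e) for e in edges}
    return [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in E]

def max_independent_set(n, edges):
    # brute force: largest pairwise non-adjacent vertex set
    E = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                return k
    return 0
```

A maximum clique in $G$ is then just `max_independent_set(n, complement(n, edges))`, which is why hardness transfers between the two problems.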

Question Source : http://cs.stackexchange.com/questions/57536

Problem Detail:

I don't fully understand this build-heap function. Let's assume we have the array 3, 4, 5, 13, 16, 32.

It seems like we swap with the parent when it is less than the current $A[j]$, but which number does the loop start with? Maybe somebody can go through 2-3 iterations and show how the array changes after the 1st, 2nd, and 3rd iterations. Much appreciated. Oh, also, what would be the runtime?

for i = 2 to n
    j = i
    while (j > 1) and A[parent(j)] < A[j] do
        swap A[parent(j)] and A[j]
        j = parent(j)
###### Answered By : Rick Decker

It appears you're building a max-heap, where every element is greater than or equal to its child elements (if any). With that understanding, let's trace the action. First, the parent node of $A[j]$ will be $A[j/2]$ (integer division: discard any remainder) so we'll have $$\begin{array}{r|cccccccc} j & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \dotsc\\ parent(j) & \_ & 1 & 1 & 2 & 2 & 3 & 3 & \dotsc \end{array}$$ To keep things simple, we'll let the initial array be $[3,4,5,13]$:

Insert at $i=2$.

$A[parent(2)]=A[1]=3 < 4=A[2]$ so we swap $A[1]$ and $A[2]$, giving us the array $$[4,3,5,13]$$

Insert at $i=3$.

$A[parent(3)]=A[1]=4 < 5=A[3]$ so we swap $A[1]$ and $A[3]$, giving us the array $$[5,3,4,13]$$

Insert at $i=4$.

$A[parent(4)]=A[2]=3 < 13=A[4]$ so we swap $A[2]$ and $A[4]$, giving us the array $$[5,13,4,3]$$ and now $j=parent(4)=2>1$ so we see if we need another swap.

$A[parent(2)]=A[1]=5 < 13=A[2]$ so we swap $A[1]$ and $A[2]$, giving us the array $$[13,5,4,3]$$ and we're done, the array is now a max-heap.

The runtime of this algorithm is no worse than a multiple of $n\log n$, since no element is further than $\log_2n$ from the root at $i=1$, so you'll need at most $\log n$ swaps for each of the $n$ elements. This, by the way, is not as good as possible: there's a different algorithm that builds a heap in no more than a multiple of $n$ time.
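The pseudocode maps directly onto a 0-indexed Python list (a sketch; position $j$'s parent is $j/2$ in 1-indexed terms):

```python
def build_max_heap(A):
    # sift-up construction: insert positions 2..n one by one,
    # swapping each new element up while it exceeds its parent
    n = len(A)
    for i in range(2, n + 1):
        j = i
        while j > 1 and A[j // 2 - 1] < A[j - 1]:
            A[j // 2 - 1], A[j - 1] = A[j - 1], A[j // 2 - 1]
            j //= 2
    return A
```

Running it on the traced prefix `[3, 4, 5, 13]` reproduces the final array `[13, 5, 4, 3]` from the walkthrough above.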

Question Source : http://cs.stackexchange.com/questions/54427

Problem Detail:

I have a string of length $N$. I would like to know how many segmentations of it are possible. Consider the example abcdc, where $N = 5$. All possible segmentations are:

['abcdc']
['abcd', 'c']
['abc', 'dc']
['abc', 'd', 'c']
['ab', 'cdc']
['ab', 'cd', 'c']
['ab', 'c', 'dc']
['ab', 'c', 'd', 'c']
['a', 'bcdc']
['a', 'bcd', 'c']
['a', 'bc', 'dc']
['a', 'bc', 'd', 'c']
['a', 'b', 'cdc']
['a', 'b', 'cd', 'c']
['a', 'b', 'c', 'dc']
['a', 'b', 'c', 'd', 'c']

Then what happens when $N$ tends to infinity? Are there any closed-form equations?

There are $n-1$ points where you can break the string. Each is independent of the others. Therefore there are $2^{n-1}$ possibilities to break the string.
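A short enumeration (my own sketch) reproduces the 16 segmentations and the $2^{n-1}$ count:

```python
def segmentations(s):
    # at each of the n-1 gaps we either cut or don't:
    # pick the length of the first piece, recurse on the rest
    if not s:
        return [[]]
    return [[s[:i]] + rest
            for i in range(1, len(s) + 1)
            for rest in segmentations(s[i:])]
```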

Question Source : http://cs.stackexchange.com/questions/54700

Problem Detail:

Let's say we have a tree like this:

Suppose we are given $N$ distinct elements, $N$ being the number of vertices in the tree (in this case, $N=13$).

In how many ways can we distribute the given $N$ elements in the tree so that it has the max-heap property (any vertex is larger than its children)?

###### Answered By : Yuval Filmus

You can compute this recursively as follows. It will be convenient to denote (binary) trees using the following notation: a tree is either a leaf $\ast$ or is formed from two subtrees $(T_1,T_2)$. For a tree $T$, let $|T|$ denote the number of vertices in $T$, defined inductively by $|\ast|=1$ and $|(T_1,T_2)| = |T_1|+|T_2|+1$. Finally, let $N(T)$ denote the number of ways of allocating the integers $1,\ldots,|T|$ to $T$ so that the max-heap property is satisfied.

The base case is $N(\ast)=1$. Now consider a tree $T=(T_1,T_2)$. Clearly we must put $|T|$ at the root. Every filling out of the rest of the tree can be described as follows:

• Choosing a set $S_1$ of integers as labels for $T_1$. The other elements $S_2$ function as labels for $T_2$.
• Arranging $S_1$ in $T_1$ in a way satisfying the max-heap property.
• Same for $S_2$ and $T_2$.

The number of ways of accomplishing the second bullet is $N(T_1)$, and the number of ways of accomplishing the third bullet is $N(T_2)$; in both cases, the exact contents of $S_1$ and $S_2$ are not important. The number of choices in the first bullet is $\binom{|T|-1}{|T_1|}$, and we conclude that $$N((T_1,T_2)) = \binom{|T_1|+|T_2|}{|T_1|}N(T_1)N(T_2),$$ with initial case $N(\ast) = 1$.

Using this formula you can easily compute that for your tree the number of possibilities is $304128$ (I believe).

You can also "unroll" the formula: $$N(T) = \prod_{x=(x_1,x_2) \in T} \binom{|x_1|+|x_2|}{|x_1|},$$ where $x$ goes over all internal nodes in $T$. For example, for your tree we get (top to bottom, left to right) $$\binom{12}{5} \binom{4}{3} \binom{6}{5} \binom{2}{1} \binom{4}{1} \binom{2}{1} = 304128.$$
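With trees written as nested pairs (a leaf is `()`), the recursion is a few lines (a sketch; the 13-vertex tree in the test is one tree consistent with the binomials listed above, not necessarily the exact tree from the question):

```python
from math import comb

def size(t):
    # a tree is either a leaf () or a pair (left, right) of subtrees
    return 1 if t == () else 1 + size(t[0]) + size(t[1])

def heap_labelings(t):
    # N(*) = 1;  N((T1, T2)) = C(|T1|+|T2|, |T1|) * N(T1) * N(T2)
    if t == ():
        return 1
    left, right = t
    return (comb(size(left) + size(right), size(left))
            * heap_labelings(left) * heap_labelings(right))
```

For the complete binary tree on 7 vertices this gives $\binom{6}{3} \cdot 2 \cdot 2 = 80$, matching the hook-length count $7!/63$.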

Question Source : http://cs.stackexchange.com/questions/53490

Problem Detail:

Some time ago, I asked this question but no one quite understood it or was able to answer. I deleted the original question and have since decided to try it again.

As I understand it, all things digital are originally based on ones and zeros - binary code.

However, I have wondered for some time if it would be possible (now or in the future) to use the digits of pi (approximately 22/7) in place of the ones and zeros.

So, my question, is it possible? Could it ever be?

No, it cannot happen, for many reasons. What would logic look like? How would you add two numbers? There is already a big problem with telling apart $0$ and $1$ at high frequencies, so adding a third option would be harder to manufacture; encoding values with a non-natural base would be harder still. With a non-natural base, all the operations that we do are doomed. If you are not convinced, try an easy example: convert $4$ and $6.5$ into base $\pi$, add them, and write down the result. Since finite precision kicks in, your basic addition fails. The only operation that would benefit from such a base is $\pi + \pi$.
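To see the arithmetic trouble concretely, a greedy digit-by-digit expansion in base $\pi$ (my own sketch) does not terminate even for an integer like $4$:

```python
from math import floor, log, pi

def greedy_base_pi(x, ndigits=12):
    # greedy beta-expansion: digits d_j in {0, 1, 2, 3} such that
    # x ~= sum_j d_j * pi**(k - j), starting at the highest power k
    k = floor(log(x, pi))
    digits = []
    for i in range(k, k - ndigits, -1):
        d = floor(x / pi ** i)
        digits.append(d)
        x -= d * pi ** i
    return k, digits

def from_base_pi(k, digits):
    # reconstruct the (approximate) value from the digits
    return sum(d * pi ** (k - j) for j, d in enumerate(digits))
```

Every finite digit string only approximates $4$; there is no exact finite representation to do arithmetic on.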

Question Source : http://cs.stackexchange.com/questions/53270

Problem Detail:

In this paper,

http://www.vision.caltech.edu/malaa/publications/aly08realtime.pdf

the author mentioned "the vertical and horizontal focal lengths" but didn't have any clear definition.

My understanding about "focal length" is the distance between the lens and the image plane.

Does anyone have an idea about these focal lengths?

Thank you.

Note that they assume a pinhole camera model. The term 'focal length' means something different here than it does with a lens camera.

All you ever really need to know about the pinhole camera model can be expressed as: $$x_\mbox{imageplane} := x_\mbox{world}/z_\mbox{world}$$ $$y_\mbox{imageplane} := y_\mbox{world}/z_\mbox{world}$$ This describes how points in the scene project to an image plane 1 unit in front of the camera origin (center of projection & center of world coordinate system), with the optical axis along the z direction.

All the other values usually discussed in connection with pinhole cameras 'just' describe which rectangular part of the image plane is mapped to the rectangle of pixels that is your image. Or, conversely, how pixel coordinates can be transformed back into points on the image plane (and from there, can be transformed to directions in the scene).

Of course this rectangle-on-a-2d-plane can be defined by a 2d point $b$ and two 2d vectors $v_1, v_2$ such that:

$$\begin{bmatrix}x_\mbox{image}\\y_\mbox{image}\end{bmatrix} := \begin{bmatrix}\vec{v_1}&\vec{v_2}\end{bmatrix} \begin{bmatrix}x_\mbox{imageplane}\\y_\mbox{imageplane}\end{bmatrix} + b$$ If we abbreviate this as $(x_\mbox{image},y_\mbox{image}) = f(x_\mbox{imageplane},y_\mbox{imageplane})$ the part of the imageplane that your image depicts is the set $f^{-1}([0,\mbox{width}]\times[0,\mbox{height}])$ which, as I said, is a rectangle¹.

The 'other values', for example:

• optical center: $c_x, c_y$ or $c_u, c_v$
• horizontal and vertical 'focal length': $f_x, f_y$ or $f_u, f_v$
• aspect ratio
• field of view angle $\alpha$
• shear (you don't usually want that)

are just used to describe special cases of the above formula.

In your case, $b = (c_u, c_v)$ and $v_1 = (f_u,0), v_2 = (0,f_v)$ such that

$$\begin{bmatrix}x_\mbox{image}\\y_\mbox{image}\end{bmatrix} := \begin{bmatrix}f_u x_\mbox{imageplane} + c_u\\f_v y_\mbox{imageplane} + c_v\end{bmatrix}$$

¹ Note that it is not the rectangle with corners $b, b+v_1,b+v_2,b+v_1+v_2$. To find it you'd have to compute the inverse of the function $f$.
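The two formulas above compose into the usual pinhole projection. A minimal numeric sketch (the focal lengths and optical center below are made-up example values, not from the paper):

```python
def project(xw, yw, zw, fu, fv, cu, cv):
    """Project a world point through a pinhole camera with
    'focal lengths' (fu, fv) and optical center (cu, cv)."""
    # Step 1: perspective division onto the image plane z = 1.
    x_ip = xw / zw
    y_ip = yw / zw
    # Step 2: map image-plane coordinates to pixel coordinates.
    return fu * x_ip + cu, fv * y_ip + cv

# A point on the optical axis lands exactly on the optical center.
print(project(0.0, 0.0, 5.0, fu=800.0, fv=820.0, cu=320.0, cv=240.0))  # -> (320.0, 240.0)
```

Note that $f_u$ and $f_v$ here are unitless scale factors (pixels per unit of image-plane coordinate), which is why they can differ horizontally and vertically when pixels are not square.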

Question Source : http://cs.stackexchange.com/questions/50198

Problem Detail:

So, untyped lambda calculus has the following formal grammar for its terms:

$$e::= x \mid \lambda x . e \mid e_1 e_2$$

Usually this is presented in some ML-esque language as (using de Bruijn indices)

data term = variable Nat | lambda term | apply term term

My question is: apply (variable Nat) term is syntactically valid, but the rator is just a free variable, isn't this an invalid expression? If not, what does it evaluate to?

###### Answered By : Anton Trunov

You can't evaluate (x t), where x is free, since evaluation ($\beta$-reduction) is usually defined in terms of substitution, but here you have neither a bound variable nor a function body for $t$ to be substituted into.

A term that cannot take a further step is called a normal form.

Question Source : http://cs.stackexchange.com/questions/49700

Problem Detail:

On the 6th slide at https://web.stanford.edu/class/ee364b/lectures/bb_slides.pdf, while defining L2 and U2, why are we taking min for both?

###### Asked By : Deepankar Arya

The slides are correct: remember that we are trying to find a bound on the optimal value of $f$, which is defined as $$\Phi_\mbox{min}({\cal Q}_\mbox{init}) =\min_{x\in{\cal Q}_{\tiny\mbox{init}}}f(x)$$

To find that minimum, we approximate it by upper and lower bounds $\Phi_\mbox{ub}$ and $\Phi_{\mbox{lb}}$. Let's concentrate on $\Phi_{\mbox{ub}}$. Note that if ${\cal Q}={\cal Q}_1\cup{\cal Q}_2$, then both $$\min(\Phi_{\mbox{ub}}(\cal Q_1),\Phi_{\mbox{ub}}(\cal Q_2))\geq\Phi_\mbox{min}(\cal Q)$$ and $$\max(\Phi_{\mbox{ub}}(\cal Q_1),\Phi_{\mbox{ub}}(\cal Q_2))\geq\Phi_\mbox{min}(\cal Q)$$

But we might as well take $\min$, since that is the value that is closer to the global minimum $\Phi_\min(\cal Q)$!

In fact, taking the minimum is going to be necessary if we want the branch and bound method to converge to the minimum as we sub-divide $\cal Q$ into smaller and smaller pieces.

Question Source : http://cs.stackexchange.com/questions/47979

Problem Detail:

We have Stocks in several discrete positions, let's say:

A    B    C
40   20   80

And Demand, which may be satisfied by one or more Stock positions (if none is specified, it will accept any Stock).

Demand for A=20 may be satisfied from the A Stock only, and depletes 20 units from it. Demand for AB=20 may be satisfied from either of the A and B Stocks, and may take partially from both, without any proportion limitations.

So for the Stocks example above:

This Demand may be satisfied (20 strict from A, 10 strict from B, and the 30 AB taken from both):

AB   B    A
30   10   20

And this one may not be satisfied (insufficient B Stock):

AB   B    B
30   10   20

This sounds like a more or less common problem which should have a known solution. I would be happy with any reference or idea. Also if the solution is resource heavy a cheaper approximation would be great.

###### Answered By : Tom van der Zanden

You can formulate this as a maximum flow problem, which you can solve using standard techniques.

Turn every stock into a source node with capacity equal to the amount of stock. Every demand gets turned into a sink with demand equal to its demand. Add edges between stocks and demands where the stock can fulfill the demand.

If you don't want more than one sink or source, that is possible too: create the stock and demand nodes as before, add an edge from the single source to every stock node (with capacity equal to the amount of stock available), and similarly add an edge from every demand node to the single sink (with capacity equal to the demand).
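The single-source/single-sink construction can be sketched with a small Edmonds–Karp implementation. The node names and example numbers below are just an illustration of the reduction, not part of the answer:

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in capacity[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck capacity along the path, then push flow along it.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            capacity[parent[v]][v] -= bottleneck
            capacity[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Stocks A=40, B=20; demands: 20 from A only, 30 from A or B, 10 from B only.
cap = defaultdict(lambda: defaultdict(int))
for stock, amount in [("A", 40), ("B", 20)]:
    cap["source"][stock] = amount
for demand, amount, allowed in [("dA", 20, ["A"]),
                                ("dAB", 30, ["A", "B"]),
                                ("dB", 10, ["B"])]:
    cap[demand]["sink"] = amount
    for s in allowed:
        cap[s][demand] = amount  # stock s may serve this demand
flow_val = max_flow(cap, "source", "sink")
print(flow_val == 60)  # all 60 units of demand can be routed
```

The demand is fully satisfiable exactly when the maximum flow equals the total demand.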

Question Source : http://cs.stackexchange.com/questions/47803

Problem Detail:

If $A$ is many-one reducible to $B$ and $A$ is decidable, then is $B$ decidable?

I'm preparing for an exam; please let me know whether this holds.

I understood that if $B$ is decidable then $A$ is decidable,

and that if $A$ is not decidable then $B$ is not decidable.

###### Answered By : Rick Decker

If $A$ is reducible to $B$ and $A$ is decidable, then $B$ is not necessarily decidable.

Suppose, for example, that $A=\{1\}$ and $B$ is, say, $\{\langle\, M\,\rangle\mid L(M)\text{ is infinite}\}$. It's well-known that $B$ is undecidable, and of course $A$ is decidable (any finite language is decidable).

Now let $X$ be a TM for which $L(X)=\Sigma^*$. We could pick $X$ to be a one-state TM where the start state was also an accepting state and on every input symbol there was a transition from the start state to itself, for example. In a similar way, let $Y$ be a particular TM for which $L(Y)=\varnothing$.

We could then produce a many-one map from $A$ to $B$ by: $$f(w)=\begin{cases} \langle\, X\,\rangle & \text{if } w=1\\ \langle\, Y\,\rangle & \text{if } w\ne 1 \end{cases}$$ Clearly this is a computable function and $w\in A\Longleftrightarrow f(w)\in B$, establishing a reduction from $A$ to $B$ with $A$ decidable and $B$ undecidable.

Question Source : http://cs.stackexchange.com/questions/47136

Problem Detail:

I just learned about recurrences and I just can't solve this problem. I have this recurrence relation:

$$T(n) = \begin{cases} k\cdot T(\frac{n}{k}) & n > 0\\ 1 & n = 0\\ \end{cases}$$

where $k$ is a constant number.

I tried drawing a recurrence tree or replacing for lower $n$s but no success. I hope you can help me with an idea!

###### Answered By : Yuval Filmus

Suppose that $n$ is a power of $k$, say $n = k^t$. Then $$T(k^t) = kT(k^{t-1}) = k^2T(k^{t-2}) = \cdots = k^tT(1) = k^t,$$ assuming a base case of $T(1) = 1$. So for powers of $k$, we have $T(n) = n$. You can also prove that by induction: if $T(n/k) = n/k$ then $T(n) = kT(n/k) = k(n/k) = n$.
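The closed form $T(n) = n$ for powers of $k$ is easy to sanity-check by evaluating the recurrence directly, assuming (as the answer does) the base case $T(1) = 1$:

```python
def T(n, k):
    """Evaluate the recurrence T(n) = k * T(n / k), with T(1) = 1."""
    if n == 1:
        return 1
    return k * T(n // k, k)

# For powers of k, the recurrence unrolls to T(n) = n.
for k in (2, 3, 5):
    for t in range(6):
        n = k ** t
        assert T(n, k) == n
print("T(n) = n verified for powers of 2, 3 and 5")
```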

Question Source : http://cs.stackexchange.com/questions/43901

Problem Detail:

Just stepping into complexity theory, I am befuddled by this notion of a certificate and can't find any utility of this concept.

From my understanding, a certificate is used when you are trying to ascertain whether a problem is in NP: a problem is in NP if a claimed solution can be verified in polynomial time (and in co-NP if the absence of a solution can be verified). I cannot see why you would need a certificate to perform this verification, and how a certificate differs from "verification".

For example, suppose I were trying to decide whether all elements in a set are divisible by some number (say 3). Say I then brute-force divide every element by 3 and see that indeed every element in this set is divisible by 3. Where in this process would a certificate come into play?

Also, how would you go about to physically construct a certificate?

###### Asked By : Beached Whale

Suppose you are the verifier for SUBSET-SUM: "given a set $S$ of integers, is there a subset of $S$ whose elements sum up to exactly zero?" To show the problem is in NP, you must be able to efficiently verify that a solution I give you is correct. As the verifier, you only care about verifying solutions (not "finding solutions"!). You do not care where that solution might come from or who constructs it. Now, when you are given a subset, you can easily sum up the elements and announce whether or not those elements sum up to 0.
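A verifier for SUBSET-SUM is literally a few lines: the certificate is the claimed subset, and verification is just checking membership and summing it. This is an illustrative sketch, not code from the original answer:

```python
def verify_subset_sum(S, certificate):
    """Check that `certificate` is a genuine sub-multiset of S, is non-empty,
    and sums to exactly zero. This runs in polynomial time."""
    remaining = list(S)
    for x in certificate:
        if x not in remaining:
            return False          # certificate uses an element not in S
        remaining.remove(x)
    return len(certificate) > 0 and sum(certificate) == 0

S = [3, -9, 5, 4, 12]
print(verify_subset_sum(S, [-9, 5, 4]))   # -> True: -9 + 5 + 4 = 0
print(verify_subset_sum(S, [3, 5]))       # -> False: 3 + 5 = 8
```

The certificate here is the subset itself: a short piece of evidence that, if valid, convinces the verifier in polynomial time.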

The example you mention is a problem that is also in P (clearly, we can solve it in polynomial time). Every problem in P is also in NP. We don't have to care about the certificate, but instead we can simply solve the problem in polynomial time. (Check the definition of NP if this feels confusing, but hopefully it doesn't).

You might also enjoy the question How can I verify a solution to Travelling Salesman Problem in polynomial time?

Question Source : http://cs.stackexchange.com/questions/33776

Problem Detail:

My confusion is that if the recursive call visits the left nodes, and then adds the right nodes, how are the nodes to the right of the left nodes (and vice versa) being visited?

int size (BinaryNode t) {
    if (t == null)
        return 0;
    else
        return 1 + size(t.left) + size(t.right);
}
###### Answered By : Luke Mathieson

Remember that to get the actual value of size(t.left), you have to evaluate the method size at the node t.left; i.e., assuming t.left has left and right children, the algorithm will call size((t.left).left) and size((t.left).right).

Question Source : http://cs.stackexchange.com/questions/39893

Problem Detail:

Recently I've started thinking about caching problems in modern CPUs, where they struggle to fetch program data (not instructions) in time so that it can be processed further.
So I came up with an idea: why don't we restrict random access on program data and arrange it all as a continuous stream (the input and all eventual locals) that moves strictly in one direction through some "processing head", where you only have the possibility to read in some $x$ amount of symbols, write, let through, and append symbols at the head position (this is a data stream, not an instruction stream).
The question is whether you can effectively run computations in that system:
Is it possible to order all functions and allocate all local data/variables at compile time in such an order that they produce a usable stream of data for all further functions, so that you never need to "loop" the tape to access some particular element? (Thus emulating random access by waiting for the desired element to come by.)
I've tried to look up this topic on the internet and found nothing. Maybe that's because this idea is obviously flawed, so no one bothers with it :)

No. Consider an algorithm on a graph (e.g., network flow). It fundamentally requires random access to memory. The restriction you propose would be painful and make many standard algorithms impossible.

Question Source : http://cs.stackexchange.com/questions/25828

Problem Detail:

While reading Stallings' OS Internals and Design, I ran into a problem. Here is the example from the book.

For example, consider a simplified computer in which each instruction occupies one 16-bit word of memory. Assume that the PC is set to the location 300. On succeeding instruction cycles, it will fetch instructions from locations 301, 302, ...

My question is: If the length of instructions is 16bits, and the smallest addressable unit of the memory is 1B, why on succeeding instruction cycles it will fetch instructions from locations 301 and 302, and not multiples of two, 302 and 304?

Is the memory then organised as a sequence of 16bit long memory words?

I too read the book. If Dr. Stallings wrote this, then the memory locations 301 and 302 must be referring to word-wise, not byte-wise, locations.

Question Source : http://cs.stackexchange.com/questions/22161

Problem Detail:

In this table, B is a foreign key referring to the primary key A, with the ON DELETE CASCADE option. What will be the result after deleting (4,3), and how?

A  B
1  3
4  3
2  4
9  2
3  9
5  4

###### Answered By : Grisha Weintraub

The result will be an empty table.

The steps are :

1. delete (4,3)
2. delete rows with B = 4 --> (2,4), (5,4)
3. delete rows with B = 2, B = 5 --> (9,2)
4. delete rows with B = 9 --> (3,9)
5. delete rows with B = 3 --> (1,3)
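The cascade above can be simulated directly. A small sketch, assuming the table holds the (A, B) pairs from the question:

```python
def cascade_delete(rows, start):
    """Simulate ON DELETE CASCADE on a self-referencing table:
    deleting a row (a, b) also deletes every row whose B references a."""
    pending = [start]
    while pending:
        a, b = pending.pop()
        if (a, b) in rows:
            rows.remove((a, b))
            # every row referencing the deleted primary-key value a is deleted next
            pending.extend(r for r in rows if r[1] == a)
    return rows

table = {(1, 3), (4, 3), (2, 4), (9, 2), (3, 9), (5, 4)}
print(cascade_delete(table, (4, 3)))  # -> set(): the whole table empties out
```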

Question Source : http://cs.stackexchange.com/questions/14882

Problem Detail:

I'm still unsure about decidability (no proof needed, I just want the intuition):

X is decidable and Y is undecidable. Is the intersection of X and Y decidable or undecidable?

X is decidable and Y is a partial quantity of X. Is Y decidable too, or not?

Thanks!

###### Answered By : Rick Decker

For your first question, the answer is that $X\cap Y$ will not necessarily be decidable. Let $Y$ be an undecidable language over an alphabet $\Sigma$ and let $X=\Sigma^*$. Then obviously $X$ will be decidable and $X\cap Y=Y$ will be undecidable. On the other hand, the intersection may be decidable, as we could show by letting $X=\emptyset$.

For your second question, I assume that by "$Y$ is a partial quantity of $X$" you mean that $Y$ is a subset of $X$ (i.e., $Y\subseteq X$). Then we can do a similar construction, letting $X=\Sigma^*$, and $Y$ be undecidable, so again, $Y$ may or may not be decidable.

It's worth mentioning that this latter result frequently trips up students. Very few properties of languages are closed under subset/superset: there's no guarantee that a subset of a regular language will be regular, for example.

Question Source : http://cs.stackexchange.com/questions/27765

Problem Detail:

Given a system of two Boolean Algebra equalities

a = b. c = d.

one can exhibit a single equation

F(a,b,c,d) = 0.

which is equivalent to the former system. (Symmetric difference is pivotal for constructing such a quaternary operation F(a,b,c,d).)

Questions:

• Is there binary operation x ? y (infix notation) such that

a ? c = b ? d.

is equivalent to the original system?

• Is there binary operation x ? y such that

a ? b = c ? d.

is equivalent to the original system?

Edit (June 13): The question is more subtle than I managed to convey. Boolean algebra is a finitely definable variety. I was wondering if we can get a single axiom system by combining identities. Padmanabhan proved in 1968 that every definable class of lattices (such as BA) can be defined by a single identity within the class of lattices. The key observation in his method is equipping the identities

a = b. c = d.

with disjoint sets of variables. Then, simple disjunction

a v c = b v d.

would do.

###### Asked By : Tegiri Nenashi

No. There is no such binary operation.

This is easy to prove with a counting argument. Given a candidate binary operation $?$, count the number $n$ of boolean pairs $x,y$ such that $x?y$ is true. Out of the $16$ possible values for $a,b,c,d$, exactly $m=n^2+(4-n)^2$ of them will satisfy $a?c = b?d$. The possible values for $m$ are $m \in \{16,10,8\}$. However, there are only $4$ possible values of $a,b,c,d$ such that $a=b$ and $c=d$, and $4$ is not among the possible values for $m$.

Or, you could do as David Clarke suggests and write a tiny program to enumerate all possibilities. Frankly, you probably should have done that before asking in any case; that would have helped you ask a more specific question, such as "I know no such binary operation exists; can you give me any insight why not?".
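The counting argument can be double-checked by exactly the kind of tiny program the answer alludes to: enumerate all 16 binary boolean operations and test whether any of them makes $a?c = b?d$ hold on precisely the tuples with $a=b$ and $c=d$. A sketch:

```python
from itertools import product

PAIRS = list(product([0, 1], repeat=2))   # the 4 possible inputs of a binary op

def agrees(op):
    """Does 'a ? c == b ? d' hold on exactly the tuples with a == b and c == d?"""
    system = {t for t in product([0, 1], repeat=4) if t[0] == t[1] and t[2] == t[3]}
    candidate = {(a, b, c, d) for (a, b, c, d) in product([0, 1], repeat=4)
                 if op[(a, c)] == op[(b, d)]}
    return candidate == system

# A binary operation is any assignment of outputs to the 4 input pairs: 16 in total.
ops = [dict(zip(PAIRS, outputs)) for outputs in product([0, 1], repeat=4)]
print(any(agrees(op) for op in ops))  # -> False
```

Since $a=b$ and $c=d$ always force $a?c = b?d$, the candidate set always strictly contains the 4-element system set, so equality is impossible, matching the counting argument.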

Question Source : http://cs.stackexchange.com/questions/26472

Problem Detail:

I have a list of millions of pairs of strings, and I want to join all of the pairs that have matching members into lists without duplicates.

Example input:

[["A", "B"],  ["A", "D"],  ["M", "Q"],  ["A", "F"],  ["D", "E"],  ["Q", "Z"]]

Example output:

[["A", "B", "D", "E", "F"],  ["M", Q", "Z"]]

Does anyone know of an efficient algorithm for this? I'm somewhat constrained by memory. Anything that would square the memory from the input would not be an option.

###### Asked By : Cory Gagliardi

You can use a two pass approach:

In the first pass, identify all the different strings appearing in your input. (This can be done in various ways, e.g. hashing, trie, BST)

For the second pass initialize a Disjoint-set data structure with the strings found in the first pass and perform a union operation for each pair in the input.
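The two-pass idea can be sketched with a plain dict-based disjoint-set (union-find). This is an illustration of the approach, not the asker's code:

```python
def group_pairs(pairs):
    """Merge pairs sharing members into sorted component lists via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)          # first pass: register unseen strings
        while parent[x] != x:            # path halving keeps the trees shallow
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in pairs:                   # second pass: one union per input pair
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    groups = {}
    for x in parent:                     # collect members under their root
        groups.setdefault(find(x), []).append(x)
    return sorted(sorted(g) for g in groups.values())

pairs = [["A", "B"], ["A", "D"], ["M", "Q"], ["A", "F"], ["D", "E"], ["Q", "Z"]]
print(group_pairs(pairs))  # -> [['A', 'B', 'D', 'E', 'F'], ['M', 'Q', 'Z']]
```

Memory stays proportional to the number of distinct strings, which meets the "no squaring" constraint.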

Question Source : http://cs.stackexchange.com/questions/26400

Problem Detail:

Input: two RB trees A and B, each holding values in a certain range, such that B's range is contained within A's range on both sides. E.g., B's range can be [100...200] while A's range is [0...1000].
Output: A and B united into one RB tree AB.

How can I unite these 2 trees in $O(log(n))$ time and with additional space $O(1)$?

I have tried splitting tree A into 3 trees A1, A2, A3, so that: A1 < A2, B < A3.
A2 contains all the values that are greater than or equal to Min(B) and less than or equal to Max(B). I did this using the RB tree split algorithm, and all the splitting sums up to $O(\log(n))$ time.

Now I know what to do with A1 and A3, but what should I do with A2 and B? I know I should use the fact that B's range contains A2's range, but how does that help me?

You can't. As shown by Brown and Tarjan on the first page of "A Fast Merging Algorithm", merging two sequences with overlapping ranges has a worst-case bound of $\Theta (m \lg (n/m))$ comparisons.

Question Source : http://cs.stackexchange.com/questions/26265

Problem Detail:

I'm confused about a solution I saw about the following language not being regular:\begin{equation*} L=\{0^n ~1 ~2^n : n >0\} \end{equation*} The example solution said that $L$ was "the same as":\begin{equation*} L'=\{0^n1^n : n > 0 \} \end{equation*} so it wasn't regular. I understand why $L'$ isn't regular, but I don't understand how $L$ is the same as $L'$. How can you eliminate that middle symbol?

###### Answered By : Yuval Filmus

The language $L'$ is the "same" as $L$ (in fact, it's an asymmetric relation) in two steps. If $L$ were regular, then so would be $L'' = \{0^n 2^n : n > 0 \}$ (take a DFA for $L$ and replace $1$ edges with $\epsilon$ edges). In its turn, $L''$ is the same as $L'$, up to change of symbols.

Question Source : http://cs.stackexchange.com/questions/16122

Problem Detail:

Let us consider the well-known problem in distributed computing, the Broadcast Problem, in a network with stopping crashes.

The most natural solution to the broadcast problem is the flooding algorithm: the source of the message sends it to all available nodes of the network in the first round; if there are crashes on the edges, some of the messages will fail. Then, in the second round, the nodes that received the message in the previous round send it to all available nodes, and so on.

Let us consider a slightly different model, where in every round the source node holding the message is only capable of sending a message to one single node (stopping failures may of course still occur). It's a very useful model, which can represent a sort of controllable or limited spreading.

Unfortunately, I don't know the formal name of this model. Are you familiar with results in this model of "limited spreading"? I am sure there must be some work in this direction.

Both models you presented are the same up to a slight modification. Theoreticians usually don't care about the difference between them, except when calculating time complexity (or, sometimes, the number of sent messages in radio networks).

In the first model you presented, broadcast communication, only the broadcast operation is allowed. That is, in one time slot a node $v$ may send a message to all its neighbours, denoted $N(v)$.

In the second model, point-to-point communication, for a node to send a message to all its neighbours, it needs to send $|N(v)|$ messages, each in a different time slot. This is how we emulate the broadcast operation.

In the broadcast model, in order to emulate point-to-point communication from $v$ to $u$, the node $v$ broadcasts a message that contains the identifier of $u$. If $u$ receives the message and finds that it contains its identifier, then it accepts it; otherwise, it simply discards it.