# Check two List&lt;int&gt;s for the same numbers

You can use the .NET 3.5 `Intersect()` extension method:

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5 };
List<int> b = new List<int>() { 0, 4, 8, 12 };
List<int> common = a.Intersect(b).ToList();
```

Jeff Richter's excellent PowerCollections has `Set` with intersections. It works all the way back to .NET 2.0.

http://www.codeplex.com/PowerCollections

```
Set<int> set1 = new Set<int>(new[]{1,2,3,4,5});
Set<int> set2 = new Set<int>(new[]{0,4,8,12});
Set<int> set3 = set1.Intersection(set2);
```

You could do it the way that LINQ does it, effectively: with a set. Before 3.5 we haven't got a proper set type, so you'd need to use a `Dictionary<int,int>` or something like that:

- Create a `Dictionary<int, int>` and populate it from list `a`, using each element as both the key and the value for the entry. (The value in the entry really doesn't matter at all.)
- Create a new list for the intersection (or write this as an iterator block, whatever).
- Iterate through list `b` and check each element with `dictionary.ContainsKey`: if the key is present, add the entry to the list or yield it.

That should be O(N+M) (i.e. linear in both list sizes).

Note that this will give you repeated entries if list `b` contains duplicates. If you wanted to avoid that, you could always change the value of the dictionary entry when you first see it in list `b`.
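The steps above can be sketched as follows; the `Intersect` helper name and the sample lists are mine, for illustration, and this targets .NET 2.0 (no LINQ):

```csharp
using System;
using System.Collections.Generic;

class DictionaryIntersect
{
    public static List<int> Intersect(List<int> a, List<int> b)
    {
        // Step 1: populate the dictionary from list a.
        // The element is both key and value; the value is irrelevant.
        Dictionary<int, int> dict = new Dictionary<int, int>();
        foreach (int x in a)
        {
            dict[x] = x;
        }

        // Steps 2-3: walk list b and keep anything the dictionary contains.
        List<int> result = new List<int>();
        foreach (int x in b)
        {
            if (dict.ContainsKey(x))
            {
                result.Add(x);
            }
        }
        return result;
    }

    static void Main()
    {
        List<int> a = new List<int> { 1, 2, 3, 4, 5 };
        List<int> b = new List<int> { 0, 4, 8, 12 };
        foreach (int i in Intersect(a, b))
        {
            Console.WriteLine(i); // prints: 4
        }
    }
}
```

Each list is traversed once, so the whole thing is O(N+M) as stated.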

You can sort the second list, then loop through the first one and do a binary search on the second for each value.
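A minimal sketch of that approach using `List<T>.Sort` and `List<T>.BinarySearch`; the sample lists are assumed from the other answers:

```csharp
using System;
using System.Collections.Generic;

class BinarySearchIntersect
{
    public static List<int> Intersect(List<int> first, List<int> second)
    {
        // Sort the second list once: O(m log m).
        second.Sort();

        // For each value in the first list, binary search the second: O(n log m).
        List<int> common = new List<int>();
        foreach (int x in first)
        {
            // BinarySearch returns a non-negative index when the value is found.
            if (second.BinarySearch(x) >= 0)
            {
                common.Add(x);
            }
        }
        return common;
    }

    static void Main()
    {
        List<int> a = new List<int> { 1, 2, 3, 4, 5 };
        List<int> b = new List<int> { 12, 4, 8, 0 };
        foreach (int i in Intersect(a, b))
        {
            Console.WriteLine(i); // prints: 4
        }
    }
}
```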

If both lists are sorted, you can easily do this in O(n) time with a modified merge from merge-sort: simply "remove" (step a counter past) the lower of the two leading numbers; if they are ever equal, save that number to the result list and "remove" both of them. It takes fewer than n1 + n2 steps. This of course assumes they are sorted, but sorting integer arrays isn't exactly expensive: O(n log n)... I think. If you'd like I can throw together some code on how to do this, but the idea is pretty simple.
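That merge-style walk can be sketched like this; the method name and sample data are assumptions, not from the answer:

```csharp
using System;
using System.Collections.Generic;

class SortedMergeIntersect
{
    // O(n1 + n2) intersection of two already-sorted lists, as described above.
    public static List<int> Intersect(List<int> a, List<int> b)
    {
        List<int> result = new List<int>();
        int i = 0, j = 0;
        while (i < a.Count && j < b.Count)
        {
            if (a[i] < b[j])
            {
                i++;                  // "remove" the lower leading number
            }
            else if (a[i] > b[j])
            {
                j++;
            }
            else
            {
                result.Add(a[i]);     // equal: save it and "remove" both
                i++;
                j++;
            }
        }
        return result;
    }

    static void Main()
    {
        List<int> a = new List<int> { 1, 2, 3, 4, 5 };    // already sorted
        List<int> b = new List<int> { 0, 4, 8, 12 };      // already sorted
        foreach (int i in Intersect(a, b))
        {
            Console.WriteLine(i); // prints: 4
        }
    }
}
```

Each index only ever advances, so the loop runs at most n1 + n2 iterations.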

In a comment, the question author said that there will be

Max 15 in the first list and 20 in the second list

In this case I wouldn't bother with optimizations and would just use `List.Contains`.

For larger lists, a hash can be used to take advantage of O(1) lookup, which leads to the O(N+M) algorithm as Jon noted.

A hash requires additional space. To reduce memory usage, we should hash the shortest list.

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5 };
List<int> b = new List<int>() { 0, 4, 8, 12 };
List<int> shortestList;
List<int> longestList;
if (a.Count > b.Count)
{
    shortestList = b;
    longestList = a;
}
else
{
    shortestList = a;
    longestList = b;
}
Dictionary<int, bool> dict = new Dictionary<int, bool>();
shortestList.ForEach(x => dict.Add(x, true));
foreach (int i in longestList)
{
    if (dict.ContainsKey(i))
    {
        Console.WriteLine(i);
    }
}
```

Tested on 3.0

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5, 12, 13 };
List<int> b = new List<int>() { 0, 4, 8, 12 };
List<int> intersection = new List<int>();
Dictionary<int, int> dictionary = new Dictionary<int, int>();
a.ForEach(x => { if (!dictionary.ContainsKey(x)) dictionary.Add(x, 0); });
b.ForEach(x => { if (dictionary.ContainsKey(x)) dictionary[x]++; });
foreach (var item in dictionary)
{
    if (item.Value > 0)
        intersection.Add(item.Key);
}
```


```
var c = a.Intersect(b);
```

This only works in 3.5. I saw your requirement; my apologies.

The method recommended by ocdecio is a good one if you're going to implement it from scratch. Looking at the time complexity compared to the naive method, we see:

Sort/binary search method: T ~= O(n log n) + O(n) * O(log n) ~= O(n log n)

Looping through both lists (naive method): T ~= O(n) * O(n) ~= O(n^2)

There may be a quicker method, but I am not aware of it. Hopefully that justifies choosing his method.

Here is a method that removes duplicate strings. Change it to accommodate `int` and it will work fine.

```
public List<string> removeDuplicates(List<string> inputList)
{
    Dictionary<string, int> uniqueStore = new Dictionary<string, int>();
    List<string> finalList = new List<string>();
    foreach (string currValue in inputList)
    {
        if (!uniqueStore.ContainsKey(currValue))
        {
            uniqueStore.Add(currValue, 0);
            finalList.Add(currValue);
        }
    }
    return finalList;
}
```

Update: Sorry, I am actually combining the lists and then removing duplicates. I am passing the combined list to this method. Not exactly what you are looking for.

*(Previous answer - changed IndexOf to Contains, as IndexOf casts to an array first)*

Seeing as it's two small lists, the code below should be fine. Not sure if there's a library with an intersection method like Java has (although List isn't a set, so it wouldn't work); I know, as someone pointed out, that the PowerCollections library has one.

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5 };
List<int> b = new List<int>() { 0, 4, 8, 12 };
List<int> result = new List<int>();
for (int i = 0; i < a.Count; i++)
{
    if (b.Contains(a[i]))
        result.Add(a[i]);
}
foreach (int i in result)
    Console.WriteLine(i);
```

**Update 2:** HashSet was a dumb answer, as it's 3.5, not 3.0.

**Update**: HashSet seems like the obvious answer:

```
// Method 2 - HashSet from System.Core
HashSet<int> aSet = new HashSet<int>(a);
HashSet<int> bSet = new HashSet<int>(b);
aSet.IntersectWith(bSet);
foreach (int i in aSet)
Console.WriteLine(i);
```

Wow. The answers thus far look very complicated. Why not just use:

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5, 12, 13 };
List<int> b = new List<int>() { 0, 4, 8, 12 };
...
public List<int> Dups(List<int> a, List<int> b)
{
    List<int> ret = new List<int>();
    foreach (int x in b)
    {
        if (a.Contains(x))
        {
            ret.Add(x);
        }
    }
    return ret;
}
```

This seems much more straightforward to me... unless I've missed part of the question. Which is entirely possible.
