Ramda Recommendation For Removing Duplicates From A Slightly Nested Array
Solution 1:
I can't easily test it on my phone, but something like this should work:
pipe(
  groupBy(prop('id')),
  map(pluck('failedReason')),
  map(flatten),
  map(uniq)
)
Update
I just got around to looking at this on a computer, and noted that the output wasn't quite what you were looking for. Adding two more steps would fix it:
pipe(
  groupBy(prop('id')),
  map(pluck('failedReason')),
  map(flatten),
  map(uniq),
  toPairs,
  map(zipObj(['id', 'failedReason']))
)
You can see this in action on the Ramda REPL.
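For reference, here is a self-contained sketch of that final pipeline run against the sample data (the dedupe binding and the destructured imports are my additions, not part of the original answer); the comments trace the shape of the data after each step:

const { pipe, groupBy, prop, map, pluck, flatten, uniq, toPairs, zipObj } = R;

const dedupe = pipe(
  groupBy(prop('id')),        // {'001': [record, ...], '002': [record]}
  map(pluck('failedReason')), // {'001': [[1000], [1001], [1002], [1000], [1000, 1003]], ...}
  map(flatten),               // {'001': [1000, 1001, 1002, 1000, 1000, 1003], ...}
  map(uniq),                  // {'001': [1000, 1001, 1002, 1003], ...}
  toPairs,                    // [['001', [1000, 1001, 1002, 1003]], ...]
  map(zipObj(['id', 'failedReason'])) // back to an array of records
);

dedupe([
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1001]},
  {id: '001', failedReason: [1002]},
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1000, 1003]},
  {id: '002', failedReason: [1000]}
]);
//=> [{id: '001', failedReason: [1000, 1001, 1002, 1003]},
//    {id: '002', failedReason: [1000]}]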
Solution 2:
You could define a wrapper type which satisfies the requirements of Monoid. You could then simply use R.concat to combine values of the type:
// Thing :: { id :: String, failedReason :: Array String } -> Thing
function Thing(record) {
  if (!(this instanceof Thing)) return new Thing(record);
  this.value = {id: record.id, failedReason: R.uniq(record.failedReason)};
}

// Thing.id :: Thing -> String
Thing.id = function(thing) {
  return thing.value.id;
};

// Thing.failedReason :: Thing -> Array String
Thing.failedReason = function(thing) {
  return thing.value.failedReason;
};

// Thing.empty :: () -> Thing
Thing.empty = function() {
  return Thing({id: '', failedReason: []});
};

// Thing#concat :: Thing ~> Thing -> Thing
Thing.prototype.concat = function(other) {
  return Thing({
    id: Thing.id(this) || Thing.id(other),
    failedReason: R.concat(Thing.failedReason(this), Thing.failedReason(other))
  });
};

// f :: Array { id :: String, failedReason :: Array String }
//   -> Array { id :: String, failedReason :: Array String }
var f = R.pipe(
  R.map(Thing),
  R.groupBy(Thing.id),
  R.map(R.reduce(R.concat, Thing.empty())),
  R.map(R.prop('value')),
  R.values
);

f([
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1001]},
  {id: '001', failedReason: [1002]},
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1000, 1003]},
  {id: '002', failedReason: [1000]}
]);
//=> [{"id": "001", "failedReason": [1000, 1001, 1002, 1003]},
//    {"id": "002", "failedReason": [1000]}]
I'm sure you could give the type a better name than Thing. ;)
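As a quick illustration of what makes this work: R.concat dispatches to the concat method of its first argument, so two Things can be combined directly, and the constructor re-applies R.uniq on the way out:

var a = Thing({id: '001', failedReason: [1000, 1001]});
var b = Thing({id: '001', failedReason: [1000, 1003]});

// R.concat sees a non-array, non-string first argument and
// dispatches to a.concat(b); Thing's constructor deduplicates.
R.concat(a, b).value;
//=> {id: '001', failedReason: [1000, 1001, 1003]}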
Solution 3:
For fun, and mainly to explore the advantages of Ramda, I tried to come up with a "one liner" to do the same data conversion in plain ES6... I now fully appreciate the simplicity of Scott's answer :D
I thought I'd share my result because it nicely illustrates what a clear API can do in terms of readability. The chain of piped maps, flatten and uniq is so much easier to grasp...

I'm using Map for grouping and Set for filtering duplicate failedReason values.
const data = [
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1001]},
  {id: "001", failedReason: [1002]},
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1000, 1003]},
  {id: "002", failedReason: [1000]}
];
const converted = Array.from(data
  .reduce((map, d) => map.set(
    d.id, (map.get(d.id) || []).concat(d.failedReason)
  ), new Map())
  .entries())
  .map(e => ({ id: e[0], failedReason: Array.from(new Set(e[1])) }));
console.log(converted);
If at least the MapIterator and SetIterator objects had a .map or even a .toArray method, the code would've been a bit cleaner.
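One small consolation (my note, not from the original answer): Array.from accepts a mapping function as its second argument, and a Map's default iterator already yields [key, value] entries, so both the .entries() call and the trailing .map can be folded away:

const converted = Array.from(
  data.reduce(
    (map, d) => map.set(d.id, (map.get(d.id) || []).concat(d.failedReason)),
    new Map()
  ),
  // destructure each [id, reasons] entry and dedupe via Set
  ([id, reasons]) => ({ id, failedReason: [...new Set(reasons)] })
);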