Efficiency of std::min in C++
I am implementing the Floyd-Warshall algorithm in C++ for a MOOC on Coursera. I used std::min to find the minimum between the current distance between two nodes and the distance with an intermediate node included (as in the standard pseudocode of the algorithm). I found that using std::min increased the runtime of my program compared to using an if condition.
e.g.

    if (a[i][j] > a[i][k] + a[k][j]) a[i][j] = a[i][k] + a[k][j];

instead of

    a[i][j] = min(a[i][j], a[i][k] + a[k][j]);
I ran both programs on these test cases: https://github.com/beaunus/stanford-algs/tree/master/testcases/course4/assignment1allpairsshortestpath , and found that on the larger test cases the code using std::min took two to three times as long as the code using if conditions.
[I'm aware a similar question has been asked, but it doesn't have a real answer.] Edit: I have marked with "here" the places where I substituted if conditions for std::min, and the performance difference is noticeable. The complete code is here:
    // This implements the Floyd-Warshall algorithm for all-pairs shortest paths.
    #include <cstdio>
    #include <algorithm>
    const int inf = 10000, nmax = 2048; // according to the constraints of the test files.
    using namespace std;
    int a[nmax + 1][nmax + 1];
    int main()
    {
        int n, m; // n = number of vertices, m = number of edges in the directed graph.
        scanf("%d %d", &n, &m);
        // Initializing the array of distances.
        for (int i = 1; i <= n; i++)
        {
            for (int j = 1; j <= n; j++)
            {
                a[i][j] = inf; // distance from every node to every other node = inf.
            }
            a[i][i] = 0; // distance from every node to itself is zero.
        }
        // Taking input of m edges and storing them.
        int x, y, dist;
        for (int i = 0; i < m; i++)
        {
            scanf("%d %d %d", &x, &y, &dist);
            if (a[x][y] > dist) // here.
                a[x][y] = dist; // there is an edge of weight "dist" with tail x and head y.
        }
        for (int k = 1; k <= n; k++)
        {
            for (int i = 1; i <= n; i++)
            {
                for (int j = 1; j <= n; j++)
                {
                    if (a[i][j] > a[i][k] + a[k][j]) // here.
                        a[i][j] = a[i][k] + a[k][j];
                }
            }
        }
        // This finds the minimum distance among all shortest paths.
        int mindist = 0;
        for (int i = 1; i <= n; i++)
        {
            if (a[i][i] < 0) // this happens iff there is a negative cycle.
            {
                printf("NULL\n");
                return 0;
            }
            for (int j = 1; j <= n; j++) // find the min shortest-path distance over all pairs of nodes.
            {
                if (mindist > a[i][j]) // here
                {
                    mindist = a[i][j];
                }
            }
        }
        printf("%d\n", mindist);
        return 0;
    }
The main difference here is that the second form (essentially) forces an update of every element (even though the update may make no change), whereas the first version only performs updates when they are required. How much this matters depends on the architecture, but if the array is large and the number of cases where the second term is smaller is relatively low, the unnecessary memory writes can be costly.
One way to investigate is to replace min with the conditional operator ?:. If performance is then the same as the min version, the overhead isn't in min() itself; it's in unnecessarily writing the value of a[i][j] onto itself.
You might also consider creating a reference to a[i][j], as in:

    auto &v(a[i][j]);

to minimize index calculations (the optimizer may already do this, but it can't hurt):
    auto &v(a[i][j]);
    if (v > a[i][k] + a[k][j])
    {
        v = a[i][k] + a[k][j];
    }
NB: regarding the local reference, a reasonable compiler should (with optimization on) spot this and emit what amounts to the same code. If you're not yet familiar with optimization, it may not be the time to go into that. A common objection to hand-optimizing is readability, but I'd argue that in this case the optimized version is, if anything, more readable anyway.