Good that you solved the first part. Perhaps you could share the solution; others might benefit from it too.
About the second part, I interpret your question as:
"Will this procedure change the point to which the calculation converges? I.e., will it now converge to a different result than if it was left to run flat out all the way?"
In principle it will converge to a different point, since when you restore you lose the mixing history. However, if the convergence is stable, the difference should be negligible, just a matter of numerical accuracy. To improve the quality, you may in this case want to converge to a tighter tolerance than the default (1e-5), just to be sure.
And, at the end of the day, to be really sure, you should verify this with a test: let one calculation run all the way through, then start over, interrupt it and restart it, and compare the physical quantities (transmission etc.) from the two runs.
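To make the mixing-history point concrete, here is a toy Python sketch, not the actual code: a scalar fixed-point iteration stands in for the density update, with plain linear mixing (real codes typically use Pulay/Broyden mixing, whose stored history is exactly what a restart discards). It runs once straight through and once with an interrupt-and-restart, then compares the two answers:

```python
import numpy as np

def scf_update(rho):
    # Hypothetical stand-in for the density update; cos(x) has a
    # stable fixed point near 0.7390851, so the iteration contracts.
    return np.cos(rho)

def run_scf(rho, tol=1e-5, mix=0.3, max_iter=500):
    # Plain linear mixing: rho <- (1 - mix)*rho + mix*F(rho).
    # Unlike Pulay/Broyden mixing this keeps no history, but the
    # interrupt/restart logic below is the same either way.
    for i in range(1, max_iter + 1):
        rho_new = (1 - mix) * rho + mix * scf_update(rho)
        drho = abs(rho_new - rho)
        rho = rho_new
        if drho < tol:
            return rho, i
    return rho, max_iter

# Run 1: straight through to convergence.
rho_full, n_full = run_scf(0.0)

# Run 2: interrupt after 10 iterations, keep only the density, restart.
rho_saved, _ = run_scf(0.0, tol=0.0, max_iter=10)  # tol=0 forces all 10 steps
rho_restart, n_rest = run_scf(rho_saved)           # any history is now gone

print(f"full run:  {rho_full:.8f}  ({n_full} iterations)")
print(f"restarted: {rho_restart:.8f}  (10 + {n_rest} iterations)")
print(f"diff:      {abs(rho_full - rho_restart):.1e}")  # of order tol or below
```

For a stable contraction like this, both runs land on the same fixed point to within the tolerance; that is the "negligible difference" I mean above. The interesting test is whether your real system behaves the same way.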
I must say, however, that I don't really see how your approach would be useful... The calculation will never converge to anything wrong, unless it's the particular case of converging to zero charge (a common headache), but that usually happens within the first 5-6 iterations anyway. Otherwise it's simply a matter of converging or not converging, where "not converging" means running endless iterations. The latter is a problem, but the only way to discover it is to ... run the calculation until it converges. Or rather, to inspect the convergence patterns (the dRho and dEtot values): if the scf loop is on a path to convergence, they will decrease steadily (usually after an initial period of wobbling and slow stabilization); if not, they will just sit at values like 1e+01 for a long time, in which case you should simply abort and retune the parameters.
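If you want to automate that inspection rather than eyeball the log, something along these lines could flag a stalled loop early. This assumes you have already extracted the dRho column from the output yourself; the window size and slope threshold are arbitrary choices of mine, not anything the code prescribes:

```python
import numpy as np

def convergence_trend(drho, window=5, slope_tol=-0.05):
    """Classify the recent dRho history as 'converging', 'stalled',
    or 'too early' to tell. Input is a list of plain floats taken
    from the scf log; parsing them out is up to you."""
    if len(drho) < window:
        return "too early"
    recent = np.log10(np.asarray(drho[-window:]))
    # Fit a line to log10(dRho) vs. iteration: a clearly negative
    # slope means a steady decrease, i.e. a path to convergence.
    slope = np.polyfit(np.arange(window), recent, 1)[0]
    return "converging" if slope < slope_tol else "stalled"

print(convergence_trend([12.0, 9.5, 11.2, 10.8, 9.9, 10.5]))  # stalled near 1e+01
print(convergence_trend([1.0, 0.3, 0.1, 0.03, 0.01, 0.003]))  # decreasing steadily
```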
Thus, I don't see any way you could inspect the results after, say, the 17th iteration and judge their quality. I mean, by what criterion would you declare the results "good"?