Discussion:
Speed Comparison: Perl, Python & C
Bart Nessux
2004-02-29 17:44:51 UTC
Permalink
Just fooling around this weekend. I wrote and timed programs in C, Perl and
Python. Each program counts to 1,000,000 and prints each number to the
console as it counts. I was a bit surprised. I'm not an expert C or Perl
programmer; I'm most familiar with Python, but can use the others as well.

Here are my results:

C = 23 seconds
Python = 26.5 seconds
Perl = 34.5 seconds

Here are the programs:

-------------------------
#The C version:
-------------------------

#include <stdio.h>

int main(void)
{
    int x = 0;
    while (x < 1000000) {
        printf("%d \n", x++);
    }
    return 0;
}

-------------------------
#The Python version:
-------------------------

#!/usr/bin/python

x = 0
while x < 1000000:
    x = x + 1
    print x

-------------------------
#The Perl version:
-------------------------

#!/usr/bin/perl -Tw

use strict;

my $x = 0;
while ($x < 1000000) {
    print $x++, "\n";
}

What do you guys think of this? I don't know enough about Perl & C, and
perhaps Python, to know if this was indeed a fair test. I thought C would
do this much faster than it did. Any ideas?
Bob Ippolito
2004-02-29 17:54:52 UTC
Permalink
Post by Bart Nessux
Just fooling around this weekend. I wrote and timed programs in C, Perl and
Python. Each program counts to 1,000,000 and prints each number to the
console as it counts. I was a bit surprised. I'm not an expert C or Perl
programmer; I'm most familiar with Python, but can use the others as well.
----
Post by Bart Nessux
What do you guys think of this? I don't know enough about Perl & C, and
perhaps Python, to know if this was indeed a fair test. I thought C would
do this much faster than it did. Any ideas?
What exactly are you trying to benchmark here? How fast your console
is? Which language buffers stdout the best?

-bob
Ville Vainio
2004-02-29 17:56:18 UTC
Permalink
Bart> fair test. I thought C would do this much faster than it
Bart> did. Any ideas?

Your program prints at every iteration, so it mostly measures I/O
performance. Drop the print, or print only every 1000th item, and you
will start seeing significant differences...
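For example, a minimal sketch of the loop printing only every 1000th
value:

import sys

x = 0
while x < 1000000:
    x = x + 1
    if x % 1000 == 0:
        print x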

This is also a textbook example of a case where using C doesn't make
much sense performance-wise. Many seem to believe that using C/C++ also
magically speeds up I/O, and choose it over the alternatives because
their network server "needs to be fast".
--
Ville Vainio http://tinyurl.com/2prnb
Bart Nessux
2004-02-29 20:27:56 UTC
Permalink
Post by Ville Vainio
Bart> fair test. I thought C would do this much faster than it
Bart> did. Any ideas?
Your program prints at every iteration, so it mostly measures I/O
performance. Drop the print, or print only every 1000th item, and you
will start seeing significant differences...
This is also a textbook example of a case where using C doesn't make
much sense performance-wise. Many seem to believe that using C/C++ also
magically speeds up I/O, and choose it over the alternatives because
their network server "needs to be fast".
I would be one of the many that you refer to. I've listened to too many
people talk about the speed of C compared to Python or other interpreted
languages. These people presented the speed difference as a generality
(anything done in C is *much* faster than the equivalent in Python). My
test doesn't support that. C is faster here, but not significantly.

I think C is *significantly* faster when it's being used by someone who
*knows* what they're doing and more importantly, they understand *why* C is
faster for the task. That's something that I currently lack... that
understanding. That's why I experiment with the two and post to this group.

Thanks,
Bart
Ville Vainio
2004-02-29 20:36:10 UTC
Permalink
Bart> I think C is *significantly* faster when it's being used by
Bart> someone who *knows* what they're doing and more importantly,
Bart> they understand *why* C is faster for the task. That's

I don't think knowing what you are doing matters all that much. When
you have to make a database query, you have to make a database
query. Actually, Python code might even be faster in circumstances
like these, because you can write more sensible algorithms without
investing a week in debugging. So caching data in memory becomes more
appealing.
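For instance, a minimal sketch of that kind of in-memory caching (the
db.query call and its arguments here are hypothetical):

_cache = {}

def lookup_user(db, user_id):
    # hypothetical query; results are kept in a dict so repeated
    # lookups skip the database round trip entirely
    if user_id not in _cache:
        _cache[user_id] = db.query(
            'select * from users where id = %s', user_id)
    return _cache[user_id]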

C is extremely fast when you are doing stuff that requires intensive
computation, loops with millions of iterations etc. Luckily this also
applies to Python modules written in C.
--
Ville Vainio http://tinyurl.com/2prnb
Cameron Laird
2004-03-01 00:18:26 UTC
Permalink
In article <***@amadeus.cc.tut.fi>,
Ville Vainio <***@spammers.com> wrote:
.
.
.
Post by Ville Vainio
query. Actually, Python code might even be faster in circumstances
like these, because you can write more sensible algorithms without
investing a week in debugging. So caching data in memory becomes more
appealing.
.
.
.
Yup; I'm now sufficiently arrogant/provocative/confident/experienced/...
to tell people to expect that their Python code will be *faster* than
what they achieve with C.
--
Cameron Laird <***@phaseit.net>
Business: http://www.Phaseit.net
Dave Cole
2004-07-07 04:42:35 UTC
Permalink
Post by Cameron Laird
.
.
.
Post by Ville Vainio
query. Actually, Python code might even be faster in circumstances
like these, because you can write more sensible algorithms without
investing a week in debugging. So caching data in memory becomes more
appealing.
.
.
.
Yup; I'm now sufficiently arrogant/provocative/confident/experienced/...
to tell people to expect that their Python code will be *faster* than
what they achieve with C.
I can cite an example of this from my own experience.

My first implementation of the Sybase DB-API implemented most of the
interface in C for speed. I had only been using Python for a couple of
weeks when I developed that version. After a while I started to think
about supporting array binding to fetch results from the server more
than one row at a time. The amount of work necessary to make the C code
support array binding was more than I was prepared to undertake.

I wanted to support array binding, so I decided to simply wrap the Sybase
CT API and implement the DB-API in Python on top of the wrapper. This
was a huge win because I was able to implement a feature that would
probably have been too hard in plain C (at least for my puny brain).

So I agree: in some (maybe even many) cases the Python implementation
will be faster than the C implementation, purely because of the increased
sophistication of the solutions you are able to implement.

- Dave
--
http://www.object-craft.com.au
David Lees
2004-02-29 18:00:29 UTC
Permalink
Post by Bart Nessux
Just fooling around this weekend. I wrote and timed programs in C, Perl and
Python. Each program counts to 1,000,000 and prints each number to the
console as it counts. I was a bit surprised.
C = 23 seconds
Python = 26.5 seconds
Perl = 34.5 seconds
<SNIP - code>
What do you guys think of this? I don't know enough about Perl & C, and
perhaps Python, to know if this was indeed a fair test. I thought C would
do this much faster than it did. Any ideas?
I don't think your times have much to do with the languages. They mostly
reflect how long whatever I/O library a particular language uses takes to
do the output. I would guess that the loop overhead is small compared with
the I/O formatting and output time in your example.

You should do a Google search on 'Python+benchmarks'. Benchmarking is
tricky and you need to consider what you are trying to compare.

David Lees
Bart Nessux
2004-02-29 20:19:52 UTC
Permalink
Post by David Lees
<SNIP>
I don't think your times have much to do with the languages. They mostly
reflect how long whatever I/O library a particular language uses takes to
do the output. I would guess that the loop overhead is small compared with
the I/O formatting and output time in your example.
You should do a Google search on 'Python+benchmarks'. Benchmarking is
tricky and you need to consider what you are trying to compare.
David Lees
Thanks. I was trying to do something very similar in each language, just
to see how fast each could do it, nothing more. I expected the order of
the results to be C, Python and then Perl, and I expected C to be the
*clear* winner. I don't know that much about programming... that's why
I have all these false notions in my head about speed.
Bob Ippolito
2004-02-29 20:38:31 UTC
Permalink
Post by Bart Nessux
<SNIP>
Thanks. I was trying to do something very similar in each language, just
to see how fast each could do it, nothing more. I expected the order of
the results to be C, Python and then Perl, and I expected C to be the
*clear* winner. I don't know that much about programming... that's why
I have all these false notions in my head about speed.
The true notion of speed is that it's relative. In general, don't
worry too much about it until it becomes a problem, or until you have a
lot of free time to go optimizing things that *already work*.
Premature optimization is just about as bad as it sounds.

-bob
Nick Patavalis
2004-02-29 17:58:27 UTC
Permalink
Post by Bart Nessux
Each program counts to 1,000,000 and prints each number to the
console as it counts.
What do you guys think of this? I don't know enough about Perl & C, and
perhaps Python, to know if this was indeed a fair test. I thought C would
do this much faster than it did. Any ideas?
This is _not_ a meaningful test! What you are actually measuring is
the performance of your terminal, not the performance of the
language. Try the same, or a similar, test _without_printing_ (just
count to 1,000,000) and you'll see a huge difference!
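For instance, a sketch that times just the counting, using the standard
time module:

import time

start = time.time()
x = 0
while x < 1000000:
    x = x + 1
print 'loop took %.3f seconds' % (time.time() - start)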

/npat
Jeremy Yallop
2004-02-29 18:09:05 UTC
Permalink
Post by Bart Nessux
Each program counts to 1,000,000 and prints each number to the
console as it counts.
Run the programs again, redirecting the output to a file.
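Or, as a sketch, have the loop write to a file itself and take the
terminal out of the measurement entirely:

# the same counting loop, writing to a file instead of the console
f = open('numbers.txt', 'w')
x = 0
while x < 1000000:
    f.write('%d\n' % x)
    x = x + 1
f.close()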

Jeremy.
Dennis Lee Bieber
2004-02-29 19:21:05 UTC
Permalink
On Sun, 29 Feb 2004 12:44:51 -0500, Bart Nessux
Post by Bart Nessux
Just fooling around this weekend. I wrote and timed programs in C, Perl and
Python. Each program counts to 1,000,000 and prints each number to the
console as it counts. I was a bit surprised. I'm not an expert C or Perl
programmer; I'm most familiar with Python, but can use the others as well.
C = 23 seconds
Python = 26.5 seconds
Perl = 34.5 seconds
Given that the slowest part of those programs is the conversion
from internal integer to printable format, followed by the actual I/O
operation... and that, at the lowest level, Python uses the C I/O
libraries, that isn't too surprising. All you've shown is the overhead
on top of the C I/O system.
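One way to see how much that per-call overhead matters is to batch the
output (a sketch: build the text in memory, then hand it to the C I/O
layer in a single write):

import sys

chunks = []
x = 0
while x < 1000000:
    chunks.append(str(x))
    x = x + 1
# one big write instead of a million small ones
sys.stdout.write('\n'.join(chunks) + '\n')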

You might want to read

http://www.pythonsoft.com/empirical_data.pdf

(Took me a while to find that -- the original author's site seems to
have closed down)

--
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>
Bart Nessux
2004-02-29 20:28:53 UTC
Permalink
Post by Dennis Lee Bieber
<SNIP>
Given that the slowest part of those programs is the conversion
from internal integer to printable format, followed by the actual I/O
operation... and that, at the lowest level, Python uses the C I/O
libraries, that isn't too surprising. All you've shown is the overhead
on top of the C I/O system.
You might want to read
http://www.pythonsoft.com/empirical_data.pdf
(Took me a while to find that -- the original author's site seems to
have closed down)
Thanks for the link, looks like a good read.
b***@aol.com
2004-03-01 18:35:32 UTC
Permalink
Bart Nessux <***@hotmail.com> wrote in message news:<c1t89m$a3u$***@solaris.cc.vt.edu>...

<SNIP>

As others have pointed out, your programs mostly test the speed of I/O.
Here is a simple comparison of the speed of integer arithmetic in
Python and Fortran 95 (Compaq Visual Fortran 6.6, with full
optimization).

Python:

i = 1
j = 0
while i < 100000000:
    j = j + i
    i = i + 1
print j

Fortran 95:

program add
    integer*8 :: i, j
    j = 0
    do i = 1, 99999999
        j = j + i
    end do
    print *, j
end program add

numerical result: 4999999950000000
elapsed times in seconds, using the timethis.exe command on Windows XP
Professional:

Python 93.4
Fortran 0.28

The Fortran program is more than 300 times faster. The benchmarks at
http://www.polyhedron.co.uk/compare/win32/f90bench_p4.html suggest
that the Fortran/Python speed gap is even larger if the Intel Fortran
compiler is used.
The speed ratio of the fastest to slowest Fortran compiler is about
two, so ANY Fortran compiler beats Python by more than two orders of
magnitude here.

The Intel Fortran compiler for Linux can be obtained free for
noncommercial use at
http://www.intel.com/software/products/compilers/flin/noncom.htm .
Richard Brodie
2004-03-01 18:56:56 UTC
Permalink
Post by b***@aol.com
The Fortran program is more than 300 times faster.
Yes, although this example does suit very aggressive optimization.
It doesn't quite precompute the answer, but there is some fancy
parallelisation going on.
Neil Hodgson
2004-03-01 21:18:09 UTC
Permalink
Post by b***@aol.com
As others pointed out, your programs mostly test the speed of I/O.
Here is a simple comparison of the speed of integer arithmetic using
Python and Fortran 95 (Compaq Visual Fortran 6.6, with full
optimization).
It actually compares the speed of integers large enough to need 64 bits
of precision, where Fortran can use a 64-bit integer and Python uses an
unbounded integer. The test can be sped up in two ways: first by using
floating point (which is fixed-length in Python), and second by using
Psyco. On my machine:

Original test: 245 seconds
Using floats: 148 seconds
Using psyco with integers: 69 seconds
Using psyco with floats: 18.4 seconds

That is a speedup of 13 times, which would leave Fortran roughly 22
times faster.

from psyco.classes import *
import psyco

def xx():
    i = 1.0
    j = 0.0
    while i < 100000000.0:
        j = j + i
        i = i + 1
    print int(j)

psyco.profile()

xx()

Neil
b***@aol.com
2004-03-02 14:49:31 UTC
Permalink
Post by Neil Hodgson
It actually compares the speed of integers large enough to need 64 bits
of precision, where Fortran can use a 64-bit integer and Python uses an
unbounded integer. The test can be sped up in two ways: first by using
floating point (which is fixed-length in Python), and second by using
Psyco. On my machine:
Original test: 245 seconds
Using floats: 148 seconds
Using psyco with integers: 69 seconds
Using psyco with floats: 18.4 seconds
That is a speedup of 13 times, which would leave Fortran roughly 22
times faster.
<SNIP - code>
Neil
Thanks. I am going to learn about Psyco. In this case, I assume that
doing the computations with floating point numbers and finally
converting the result to int gives the same values as the original
integer calculation. In other cases, integer arithmetic will need to
be done with integers to ensure correct results.

Python is supposed to be easy (and in general I agree that it is), but
your solution requires some knowledge of

(1) how integer and floating point calculations are done (which many
novices do not have)
(2) when Psyco can speed things up

and the final result is still much slower than Fortran. For the
Fortran program, the only "trick" is the use of integer*8.
Paul Rubin
2004-03-02 19:59:16 UTC
Permalink
Post by b***@aol.com
Thanks. I am going to learn about Psyco. In this case, I assume that
doing the computations with floating point numbers and finally
converting the result to int gives the same values as the original
integer calculation.
This isn't obvious, since the answer is around 1.15*2**52 and Python
floats have 53 bits of precision.
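For what it's worth, the final value is below 2**53, and so is every
intermediate partial sum (all of them integers), so each float addition
happens to be exact here; a quick interactive check:

>>> 4999999950000000 < 2 ** 53
True
>>> int(float(4999999950000000)) == 4999999950000000
True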
ciw42
2004-03-03 00:44:48 UTC
Permalink
Even the most basic optimising compiler would turn your example Fortran
loop into a few basic machine code instructions operating on hardware
registers, which, combined with the internal cache, would keep the entire
loop inside the processor, avoiding memory bandwidth issues and so on.
It's not surprising you're getting such dramatic ratios, but it's hardly
a real-world example.

Compiled, optimised code will (should) always be significantly faster
than interpreted code, but you lose all of the benefits of using a
language such as Python in the process, so it's all rather pointless
comparing languages in this way.

I've spent over 20 years coding all manner of languages - assembly, BASIC,
COBOL, C/C++, VB, Delphi etc. and speed of execution is of little or no
concern in 99% of the projects I work on, especially these days. If it were
the be-all and end-all of development I'd still be coding in assembly.
Paul Rubin
2004-03-03 00:58:43 UTC
Permalink
Post by ciw42
Compiled, optimised code will (should) always be significantly faster
than interpreted code, but you lose all of the benefits of using a
language such as Python in the process, so it's all rather pointless
comparing languages in this way.
Of course you won't lose those benefits. The result of compiling
Python to optimized native code is that the Python code will run
faster that way. In other regards, it will be the same. You keep the
benefits and gain more speed.
Neil Hodgson
2004-03-02 20:52:55 UTC
Permalink
Post by b***@aol.com
Thanks. I am going to learn about Psyco. In this case, I assume that
doing the computations with floating point numbers and finally
converting the result to int gives the same values as the original
integer calculation. In other cases, integer arithmetic will need to
be done with integers to ensure correct results.
Yes, in this case float has enough range. That is something you have to
determine with any fixed-length numeric representation.
Post by b***@aol.com
Python is supposed to be easy (and in general I agree that it is), but
your solution requires some knowledge of
(1) how integer and floating point calculations are done (which many
novices do not have)
(2) when Psyco can speed things up
and the final result is still much slower than Fortran. For the
Fortran program, the only "trick" is the use of integer*8.
With Fortran, you need to know how large your values are going to become.
If you increased the number of iterations in your example sufficiently,
Fortran's 64-bit integers would overflow, requiring an understanding of
the concept of fixed-range integers. Python avoids this by using unbounded
integers. Python is oriented towards ease of use and correctness at the
expense of speed. If the speed of the Python program is inadequate, then
you can take a working Python program and work on its speed, or decide
that the speed problem needs another language.

Python does optimize integers that can be represented in 32 bits, but
beyond that unbounded integers are used. For some applications it would
be better if Python also optimized integers that require between 32 and
64 bits.
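A quick interactive illustration of that boundary, on a 32-bit build of
Python 2:

>>> x = 2 ** 31 - 1
>>> type(x)
<type 'int'>
>>> type(x + 1)
<type 'long'>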

Neil
Bob Ippolito
2004-03-02 21:18:06 UTC
Permalink
Post by Neil Hodgson
<SNIP>
Python does optimize integers that can be represented in 32 bits, but
beyond that unbounded integers are used. For some applications it would
be better if Python also optimized integers that require between 32 and
64 bits.
Especially on architectures that have 64-bit integer registers, but are
running on an operating system/compiler combination that uses 32 bits
for int and long (such as the PowerPC 970 on OS X). I would imagine a
similar situation exists for other processor/environment combinations.

-bob
Thomas Heller
2004-03-02 21:23:02 UTC
Permalink
Post by Bob Ippolito
Post by Neil Hodgson
Python does optimize integers that can be represented in 32 bits, but
beyond that unbounded integers are used. For some applications it would
be better if Python also optimized integers that require between 32 and
64 bits.
Especially on architectures that have 64-bit integer registers, but are
running on an operating system/compiler combination that uses 32 bits
for int and long (such as the PowerPC 970 on OS X). I would imagine a
similar situation exists for other processor/environment combinations.
Instead of making this the responsibility of the Python core, wouldn't
this be a better job for psyco?

BTW, does psyco run on non-x86 architectures?

Thomas
Bob Ippolito
2004-03-03 00:49:23 UTC
Permalink
Post by Thomas Heller
<SNIP>
Instead of making this the responsibility of the Python core, wouldn't
this be a better job for psyco?
Perhaps psyco and/or Numeric/Numarray, I guess... I would imagine that
the people who would need such speed don't care where it is, so long as
it's available to them.
Post by Thomas Heller
BTW, does psyco run on non-x86 architectures?
Yes, barely.. it uses a written-in-C "microVM" that can do some things
more efficiently than Python's "monsterVM" :)

-bob
cookedm+ (David M. Cooke)
2004-03-03 02:15:19 UTC
Permalink
Post by b***@aol.com
Python is supposed to be easy (and in general I agree that it is), but
your solution requires some knowledge of
(1) how integer and floating point calculations are done (which many
novices do not have)
(2) when Psyco can speed things up
and the final result is still much slower than Fortran. For the
Fortran program, the only "trick" is the use of integer*8.
It's quite easy (using F2PY) to make a Python wrapper around a Fortran
routine. So if you're in a situation where Fortran is much faster,
then use it. Wrap it in Python and slap a webserver and a GUI on
that puppy, and run rings around pure Fortran code.
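A sketch of what the Python side can look like (the module and routine
names are hypothetical, and this assumes the Fortran do-loop has been
recast as a subroutine and compiled with something like
f2py -c -m fastsum fastsum.f90):

# hypothetical F2PY-built extension module wrapping the Fortran loop
import fastsum

print fastsum.addup(99999999)  # Fortran speed, Python call syntax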

[Heck, I'll cry if I ever have to write any serious user interface
code in Fortran. More than my sanity is worth.]
--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca
Josiah Carlson
2004-03-01 21:32:55 UTC
Permalink
Post by b***@aol.com
i = 1
j = 0
while i < 100000000:
    j = j + i
    i = i + 1
print j
Python 93.4
Fortran 0.28
Not quite fair. The Python version does long-integer
(infinite-precision) math once i is greater than 65534 on a 32-bit
processor. What you are really counting is the time to do 100,065,535
integer additions and 99,934,465 long-integer additions.
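On a 32-bit build you can watch the running total outgrow a plain int
partway through:

>>> type(sum(xrange(65536)))   # total 2147450880 still fits in 32 bits
<type 'int'>
>>> type(sum(xrange(65537)))   # total 2147516416 does not
<type 'long'>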

Furthermore, you allow Fortran to optimize the loop, but you don't
optimize the Python loop yourself using the built-in facilities that
are known to be fast.

print sum(xrange(100000000))

takes half as long as the while loop on my machine, yet does the exact
same calculations.


Your Fortran definition includes a declaration saying "use 64-bit
integers here":
integer*8 :: i,j


You are comparing apples to oranges on multiple levels. Don't.

- Josiah
Paul Rubin
2004-03-01 22:03:13 UTC
Permalink
Post by Josiah Carlson
Not quite fair. The Python version does long-integer
(infinite-precision) math once i is greater than 65534 on a 32-bit
processor.
Why is that unfair? It's showing a difference between Fortran and
Python, that Fortran supports 64-bit integers, while Python has to
resort to longs.
Post by Josiah Carlson
Furthermore, you allow fortran to optimize the loop, but you don't
optimize the python loop yourself, using those portions built-in that
are known to be fast.
Nothing stops Python from optimizing the loop. It just doesn't do so.
Again, it's an implementation vs implementation comparison, with
Fortran winning.
Michael Hudson
2004-03-02 11:26:21 UTC
Permalink
Post by Paul Rubin
Nothing stops Python from optimizing the loop. It just doesn't do so.
Again, it's an implementation vs implementation comparison, with
Fortran winning.
I haven't read the rest of this thread, but my curiosity is almost
(but not quite) piqued enough by this statement to go back and find
out why it's being said :-)

Cheers,
mwh
--
TRSDOS: Friendly old lizard. Or, at least, content to sit there
eating flies. -- Jim's pedigree of operating systems, asr
ziaran
2004-02-29 19:05:32 UTC
Permalink
Post by Bart Nessux
Just fooling around this weekend. I wrote and timed programs in C, Perl and
Python. Each program counts to 1,000,000 and prints each number to the
console as it counts. I was a bit surprised.
C = 23 seconds
Python = 26.5 seconds
Perl = 34.5 seconds
<SNIP - code>
What do you guys think of this? I don't know enough about Perl & C, and
perhaps Python, to know if this was indeed a fair test. I thought C would
do this much faster than it did. Any ideas?
Last time I checked, doing simple prime number searching, C++ was 30
times faster than Python.
Simon Wittber
2004-03-03 01:20:25 UTC
Permalink
Post by b***@aol.com
and the final result is still much slower than Fortran. For the
Fortran program, the only "trick" is the use of integer*8.
This is hardly surprising. *Hundreds* of man-years have been spent
optimising Fortran compilers for speed.
Delaney, Timothy C (Timothy)
2004-03-03 01:06:49 UTC
Permalink
From: ciw42
Compiled, optimised code will (should) always be significantly faster
than interpreted code, but you lose all of the benefits of using a
language such as Python in the process, so it's all rather pointless
comparing languages in this way.
Of course, it's possible to create some *really* bad code in Python. I've just spent a day finding and fixing an algorithm which turned out to be O(n^3) in some legacy code (an in-memory representation of an XML file for network topologies).

It was fine for the small, initial topologies that it was developed for. Unfortunately, it didn't scale overly well to the somewhat larger topologies I'm dealing with now ... like 21000 devices ...

Put it this way: one single function call was taking an hour to return. After doing some major refactoring - mainly sticking stuff into dictionaries as the file is parsed - that same function call now takes one second. Loading the file has been reduced from 3-4 hours down to 5 minutes.
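A minimal sketch of the kind of change involved (all names here are
hypothetical):

# Before: every lookup scans the whole device list, and lookups happen
# inside nested loops, so the cost blows up with topology size.
def find_device(devices, name):
    for dev in devices:
        if dev.name == name:
            return dev
    return None

# After: build a dictionary index once while parsing the file; each
# lookup is then a constant-time dict access.
def build_index(devices):
    index = {}
    for dev in devices:
        index[dev.name] = dev
    return index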

Finding and fixing this bottleneck would have been a *lot* nastier in a lower-level language than Python. In Python, however, the optimised algorithm is clear and understandable - in fact, more so than the original code IMO ;)

Tim Delaney
Simon Wittber
2004-03-03 01:19:06 UTC
Permalink
Post by b***@aol.com
and the final result is still much slower than Fortran. For the
Fortran program, the only "trick" is the use of integer*8.
Yes, but this is hardly surprising. Hundreds of man-years have been
spent optimising Fortran compilers.

Sw.